Danny Wilf-Townsend
@dannywt.bsky.social
Associate Professor of Law at Georgetown Law, thinking, writing, and teaching about civil procedure, consumer protection, and AI.

Blog: https://www.wilftownsend.net/

Academic papers: https://papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=2491047
Overall, GPT-5-Pro was good enough to use for my (informal) approach here: it was internally consistent and looked accurate in spot checks. Its grades show some models scoring in the A- to A range, consistent with what others have found, too.
October 1, 2025 at 1:58 PM
It turns out that some models are deeply inaccurate, and some are frequently inconsistent, but a few are reasonably consistent and accurate. And along the way, I learned that human graders are sometimes less consistent than we might hope.
October 1, 2025 at 1:58 PM
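A minimal sketch of the kind of consistency and accuracy check this thread describes, assuming a hypothetical grade_essay() callable that sends one exam answer to a model and returns a letter grade; the trial count, helper names, and comparison data are illustrative, not the author's actual setup:

    # Illustrative only: grade_essay is a placeholder for whatever
    # function sends an answer to a model and returns a letter grade.
    from collections import Counter

    def consistency_rate(grade_essay, essay, trials=5):
        """Grade the same essay several times and report how often the
        modal grade recurs; 1.0 means perfectly consistent grading."""
        grades = [grade_essay(essay) for _ in range(trials)]
        modal_grade, count = Counter(grades).most_common(1)[0]
        return modal_grade, count / trials

    def agreement_rate(model_grades, human_grades):
        """Accuracy spot check: share of essays where the model's grade
        matches a human grader's grade on the same sample."""
        matches = sum(m == h for m, h in zip(model_grades, human_grades))
        return matches / len(human_grades)

Run across several models, these two numbers are enough to separate the deeply inaccurate, the frequently inconsistent, and the few that are both reasonably consistent and accurate.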
I think this is one of the more common mistakes I see with people trying AI—the idea that if you go to a free chatbot, quickly run a question by it, and it does a bad job, then you've learned that AI cannot do a good job on that question.
June 17, 2025 at 1:45 PM
The use of generated legal texts raises familiar issues in the world of AI (e.g. bias, inaccuracy) but also some distinct concerns (insincerity, a flood of documents). Check out our paper here: papers.ssrn.com/sol3/papers....
May 12, 2025 at 1:02 PM
I've got a new paper up with the inimitable @kevintobia.bsky.social: "Generated Legal Texts"—about texts generated by AI and used in legal institutions. These texts are arising frequently in legal contexts around the world, perhaps faster than many realize. And, we argue ...
May 12, 2025 at 1:02 PM
This blog post also contains my favorite AI+Law anecdote so far from the teaching-and-talks circuit.
April 22, 2025 at 3:33 PM
I haven't spent much time with the new o3 model from OpenAI, but it is the first model to get every question right in the informal legal-question testing I've been running for a while whenever new models come out.
April 17, 2025 at 3:27 PM
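A minimal sketch of an informal spot check like this one, assuming the OpenAI Python client and a hand-written question set; the sample question, expected-phrase matching, and model name are placeholders (in practice, answers to legal questions would be read and judged by hand rather than string-matched):

    # Illustrative only: a tiny harness for re-running a fixed question
    # set against a newly released model.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    QUESTIONS = [
        # (question, phrase a correct answer should contain) -- placeholder
        ("Under FRCP 65(c), what must a court address before issuing "
         "a preliminary injunction?", "security"),
    ]

    def spot_check(model="o3"):
        right = 0
        for question, expected in QUESTIONS:
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": question}],
            )
            answer = resp.choices[0].message.content
            if expected.lower() in answer.lower():
                right += 1
        print(f"{model}: {right}/{len(QUESTIONS)} correct")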
Several good things about this order. A small technical detail that is good but easy to miss is the court's ruling on FRCP 65(c). In ordinary times this would be unremarkable (or simply absent from the order), but the Trump admin has tried to use Rule 65 to make this kind of civil action harder.
April 9, 2025 at 4:39 PM
A mandatory arbitration agreement that requires subjecting disputes to an AI system. What could go wrong?
March 20, 2025 at 4:57 PM
And here is a slightly different kind of problem, from another almost-a-real-case hypothetical involving copyright and disgorgement: 7/
March 11, 2025 at 3:20 PM
And one from the consumer finance context involving a customer chatbot: 6/
March 11, 2025 at 3:20 PM
I give a few examples in the paper from different contexts, and discuss a few different problems and implications. Here's a hypothetical from the antidiscrimination context, inspired by a real case involving a content moderation tool: 5/
March 11, 2025 at 3:20 PM
In the growing world of AI litigation, class actions are going to be particularly important. In a new paper forthcoming in the Wash. U. Law Review, I look at how class actions will influence the efficacy of AI regulations, and how they might even open up some new options for regulators. 🧵
March 11, 2025 at 3:17 PM