@justincurl.bsky.social
Read more in our article published on Lawfare here: lawfaremedia.org/article/judg...

We're also planning to write a longer follow-on law review article, so share any thoughts or comments you might have! (10/10)
Judges Shouldn’t Rely on AI for the Ordinary Meaning of Text
Large language models are inherently shaped by private interests, making them unreliable arbiters of language.
May 22, 2025 at 5:45 PM
Most judges, we think, would be displeased to find their clerks taking instructions from OpenAI, regardless of whether the model had shown any explicit bias toward the company. (9/10)
Some analogize LLMs to law clerks, an arrangement few people take serious issue with. But while clerks are vetted and employed by judges, commercial LLMs are fully controlled by the companies that create them. (8/10)
What matters here is NOT the specific values chosen but that companies are selecting and enshrining values in their models at all.

Judges are supposed to interpret the law. But by consulting LLMs, they're effectively letting third parties help decide what the law means. (7/10)
2. Anthropic’s early models were trained to follow principles the company itself selected (Constitutional AI).

3. When asked for example laws that could help guide regulation of tech companies, OpenAI’s o3 refused to respond to queries mentioning OpenAI yet offered suggestions for Anthropic. (6/10)
LLMs are built, prompted, fine-tuned, and filtered by private companies with their own agendas. For example…

1. DeepSeek refuses to answer questions related to sensitive topics in China. (5/10)
Because LLMs are trained on billions of pages of text, some judges have viewed asking an LLM as a clever shortcut for finding a word's everyday meaning. But there's a catch: LLMs aren't neutral observers of language. (4/10)
Why are judges consulting LLMs?

First, some context: to resolve many cases, judges must decide the meaning of key words and phrases. In modern textual interpretation, words are given their “ordinary meaning,” which is essentially whatever the average person thinks they mean. (3/10)
Yes, this is happening: an 11th Circuit federal judge asked LLMs whether “landscaping” covers installing a backyard trampoline and whether threatening someone at gunpoint counts as “physical restraint.”

And he’s not alone. Judges across the country are citing AI in their opinions. (2/10)