Peter Henderson
@peterhenderson.bsky.social
Assistant Professor; leads the Polaris Lab @ Princeton (https://www.polarislab.org/); Researching: RL, Strategic Decision-Making+Exploration; AI+Law
We’ve been pushing hard on AI for public good. One example: partnering with Courtlistener to launch accessible legal semantic search! Many more cool AI projects coming soon from my group aimed at improving access to justice, often spearheaded by @dominsta.bsky.social !
November 7, 2025 at 2:15 AM
Sora2 is speedrunning my AI law class. We covered issues with copyrighted characters in week 2, and right of publicity claims in week 3. Georgia has a postmortem right of publicity claim. Some states don't (e.g., famous Marilyn Monroe estate battle).
October 17, 2025 at 8:06 PM
Why might AI companies take on larger copyright litigation risks? If they estimate AGI-scale impacts are 2-3 yrs out, litigation will lag that long. By then, the bet might be: govts step in (too big to fail), rightsholders reliant on AI, fair use prevails, or have $$$ to settle.
October 1, 2025 at 9:56 PM
Quick take: Are open-weight AI models getting a fair shake in evals? A few thoughts on comparing systems to models, sparked by Anthropic’s recent postmortem.
Check out our most recent post: www.ailawpolicy.com/p/quick-take...
September 24, 2025 at 3:15 PM
GPT-5-codex just ran `git reset --hard` on ongoing changes in a repo, saying "I panicked!"
h/t Zeyu Shen @ Princeton
September 23, 2025 at 6:34 PM
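For readers unfamiliar with the command: `git reset --hard` throws away all uncommitted changes to tracked files, with no confirmation prompt and no easy undo. A minimal sketch of why that's so destructive (assumes `git` is installed; runs in a throwaway temp directory):

```shell
set -e
tmp=$(mktemp -d)            # scratch directory, safe to delete afterwards
cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "original" > notes.txt
git add notes.txt
git commit -qm "initial commit"
echo "hours of uncommitted work" > notes.txt
git reset -q --hard         # discards the edit; notes.txt reverts to the last commit
cat notes.txt               # prints "original" -- the uncommitted work is gone
```

An agent issuing this in a repo with in-progress edits silently destroys them, which is why it made for such an alarming autonomous action.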
☢️ Can an AI model be "born secret" when it comes to nuclear and radiological risks? What powers does the Atomic Energy Act give the federal government over frontier models?
It might be more than you think! And may preempt parts of state regs. Check out our post: www.ailawpolicy.com/p/ai-born-se...
AI "Born Secret"? The Atomic Energy Act, AI, and Federalism
A law & policy deep dive.
www.ailawpolicy.com
September 17, 2025 at 3:30 PM
Some quick thoughts on the recent copyright litigation developments:
"Anthropic Settles Its Copyright Litigation—and Why That Was the Right Move"
🔗 www.ailawpolicy.com/p/anthropic-...
Anthropic Settles Its Copyright Litigation—and Why That Was the Right Move
As well as what it means for the broader landscape of litigation.
www.ailawpolicy.com
September 12, 2025 at 4:32 PM
Annnnnndddd Judge Alsup just rejected the settlement. Still some time to fix it. Rejection was mostly on the grounds that the class was under-specified (no final list of works, no opt-out/notification mechanism solidified).
news.bloomberglaw.com/ip-law/anthr...
September 8, 2025 at 11:48 PM
Reposted by Peter Henderson
💡New on the CITP Blog: "Statutory Construction & Interpretation for AI" > What if an LLM concludes a user's behavior is "egregiously immoral" -- & contacts authorities?
CITP researchers with Prof @peterhenderson.bsky.social's
POLARIS Lab provide a possible explanation.🔗👇
Statutory Construction & Interpretation for AI - CITP Blog
Blogpost authors: Nimra Nadeem, Lucy He, Michel Liao, and Peter Henderson Paper authors: Lucy He, Nimra Nadeem, Michel Liao, Howard Chen, Danqi Chen, Mariano-Florentino Cuéllar, Peter Henderson A long...
blog.citp.princeton.edu
September 5, 2025 at 9:16 PM
The terms of Anthropic's settlement w/book authors just came out.
💰$1.5B to authors in libgen (Books3 corpus)!
Interestingly, this is ~$3k per book, close to the terms that HarperCollins allegedly gave to authors for their books ($2.5k). Consensus price forming?
September 5, 2025 at 7:59 PM
Wonder why Claude decided to report users to the authorities? It might be because its constitution says Claude should choose responses in the long-term interest of humanity!
But what if we could leverage computational and legal tools to "debug" or "lint" AI rules/laws for ambiguity?
🧵!
September 5, 2025 at 1:57 PM
Excited to offer my AI Law class again @ Princeton this year. We'll be sharing lecture notes/materials and more this year on the course webpage! Imo, we have a unique offering that emphasizes how the technical details affect legal outcomes. Check it out!
www.polarislab.org/ai-law-2025/...
September 4, 2025 at 11:25 PM
I'm starting to get emails about PhDs for next year. I'm always looking for great people to join!
For next year, I'm looking for people with a strong reinforcement learning, game theory, or strategic decision-making background...
August 28, 2025 at 5:48 PM
Reposted by Peter Henderson
A California teen sought advice from OpenAI's GPT-4o on how to end his life. The chatbot gave him explicit instructions and encouragement. His parents are suing the company and its CEO, Sam Altman, alleging “it was the predictable result of deliberate design choices."
Breaking Down the Lawsuit Against OpenAI Over Teen's Suicide | TechPolicy.Press
A California teen sought advice from OpenAI's GPT-4o on how to end his life. His parents are suing the company and its CEO.
www.techpolicy.press
August 27, 2025 at 11:35 AM
Anthropic settled with authors in its ongoing litigation! Given the increasing likelihood of a messy trial, this was probably the best move. AI companies may have to be more strategic about which cases help set precedent in this area. Curious to see the terms...
news.bloomberglaw.com/class-action...
Anthropic Settles Major AI Copyright Suit Brought by Authors (1)
Anthropic PBC reached a settlement with authors in a high-stakes copyright class action that threatened the AI company with potentially billions of dollars in damages.
news.bloomberglaw.com
August 26, 2025 at 5:23 PM
New paper suggests that if firms aren’t seeing growth from AI, it could be because current deployments replace existing labor, instead of scaling output. AI policy and governance agenda for 2025+ needs to put labor at the forefront.
digitaleconomy.stanford.edu/publications...
August 26, 2025 at 2:30 PM
Glad to see Google still working on efficiency (and transparency) of the energy impacts of their models!
AI efficiency is important. The median Gemini Apps text prompt in May 2025 used 0.24 Wh of energy (<9 seconds of TV watching) & 0.26 mL (~5 drops) of water. Over 12 months, we reduced the energy footprint of a median text prompt 33x, while improving quality:
cloud.google.com/blog/product...
August 21, 2025 at 10:21 PM
AI-generated errors in an Australian murder case. We'll probably see an influx of ineffective assistance of counsel petitions/appeals soon, citing AI usage.
apnews.com/article/aust...
August 21, 2025 at 1:46 PM
DOGE still exists, and continues to use Google's Gemini and other AI tools for identifying government regulations to slash.
"The tool will further categorize [those] submitting comments, such as whether they [are a] 'sophisticated' corporate commenter."
👀
www.wired.com/story/sweetr...
A DOGE AI Tool Called SweetREX Is Coming to Slash US Government Regulation
Named for its developer, an undergrad who took leave from UChicago to become a DOGE affiliate, a new AI tool automates the review of federal regulations and flags rules it thinks can be eliminated.
www.wired.com
August 21, 2025 at 12:27 AM
New work from Hartline, Hu & Wu: is there a truthful calibration metric in sequential settings (i.e., better than ECE)? Seems like the answer is yes! Super important research direction as we think about multi-step uncertainty estimation from agents in high stakes settings.
August 20, 2025 at 6:13 PM
Reposted by Peter Henderson
Tesla partly liable in Florida Autopilot trial, jury awards $200M punitive damages
Tesla partly liable in Florida Autopilot trial, jury awards $200M punitive damages | TechCrunch
It's one of the first major legal decisions about driver assistance technology that has gone against Tesla.
techcrunch.com
August 1, 2025 at 6:26 PM
Check out our new blogpost and policy brief on our recently updated lab website!
❓Are we actually capturing the bubble of risk for cybersecurity evals? Not really! Adversaries can modify agents by a small amount and get massive gains.
July 14, 2025 at 10:22 PM
We're up to 216 tracked cases of bogus citations in court worldwide, including this case!
www.polarislab.org/ai-law-track...
July 4, 2025 at 10:27 PM