Elliott Thornley
@elliottthornley.bsky.social
Research Fellow at Oxford University's Global Priorities Institute.

Working on the philosophy of AI.
Recent article on the POST-Agents Proposal!
Shutdownable Agents through POST-Agency — LessWrong
Summary: Future artificial agents might resist shutdown. I present an idea – the POST-Agents Proposal – for ensuring that doesn’t happen. I p…
www.lesswrong.com
September 17, 2025 at 2:27 PM
Reposted by Elliott Thornley
A new working paper, "Shutdownable Agents through POST-Agency" by Elliott Thornley, is now available on our website. Read it here:
globalprioritiesinstitute.org/thornley-shu...
Shutdownable Agents through POST-Agency - Elliott Thornley
Many fear that future artificial agents will resist shutdown. I present an idea – the POST-Agents Proposal – for ensuring that doesn’t happen. I propose that we train agents to satisfy Preferences Onl...
globalprioritiesinstitute.org
June 2, 2025 at 1:43 PM
'Where are you?' seems like a pretty normal question, but for 99.99% of human history it basically never made sense to ask it.
May 14, 2025 at 3:56 PM
Our poster for TAIS 2025
April 8, 2025 at 6:55 AM
A gif we made summarizing our 'Towards shutdownable agents' paper for TAIS 2025.
April 8, 2025 at 6:52 AM
Gave a talk about the shutdown problem at the new Singapore AI Safety Hub!
March 24, 2025 at 6:01 PM
I've got a new paper out open-access in AJP! It’s about critical-level and critical-range views in population axiology, and why I think they’re troubled by questions of identity between lives.

www.tandfonline.com/doi/full/10....
Critical-Set Views, Biographical Identity, and the Long Term
Critical-set views avoid the Repugnant Conclusion by subtracting some constant from the welfare score of each life in a population. These views are thus sensitive to facts about biographical identi...
www.tandfonline.com
March 13, 2025 at 9:17 AM
Progress in AI has been rapid in recent years. By contrast, progress in 'opening sentences of papers about AI' has completely stalled.
December 4, 2024 at 4:02 PM
Reposted by Elliott Thornley
"Sure, the last 1000 grad students failed to solve the problem of induction, but that's no reason to think I can't do it."
November 26, 2024 at 9:16 PM
Reposted by Elliott Thornley
We’re excited to announce that our new research agendas – for philosophy, economics, and psychology – have now been published! You can read them here: globalprioritiesinstitute.org/research-age...
Research agenda - Global Priorities Institute
The central focus of GPI is what we call ‘global priorities research’: research into issues that arise in response to the question, ‘What should we do with a given amount of limited resources if our a...
globalprioritiesinstitute.org
November 29, 2024 at 11:00 AM
[Pasting over an old Twitter thread about this post.]
The Shutdown Problem: Incomplete Preferences as a Solution — AI Alignment Forum
Preamble This post is an updated explanation of the Incomplete Preferences Proposal (IPP): my proposed solution to the shutdown problem. The post is…
www.alignmentforum.org
November 26, 2024 at 10:55 AM
The introduction to my PhD thesis
[You can read it as a PDF here.]
openairopensea.substack.com
November 22, 2024 at 1:36 PM
Minor updates to an old post!

openairopensea.substack.com/p/my-favouri...
My favourite arguments against person-affecting views
openairopensea.substack.com
November 19, 2024 at 3:40 PM
Paper! arxiv.org/pdf/2407.00805

With Alex Roman, Christos Ziakas, Leyton Ho, and Louis Thomson.

Quick thread explaining it.
arxiv.org
November 18, 2024 at 4:48 PM
Oh no
November 12, 2024 at 10:14 PM