Forethought
@forethought-org.bsky.social
Research nonprofit exploring how to navigate explosive AI progress. forethought.org
Wondering whether to apply to our open roles? Research Fellow Mia Taylor joined 5 weeks ago. We just released a new episode of ForeCast, hearing from her about why she joined, what it's like to work here, and who the work is likely (and unlikely) to suit.

pnc.st/s/forecast/...
Forethought is Hiring Researchers (with Mia Taylor)
This is a bonus episode to say that Forethought is hiring researchers. After an overview of the roles, we hear from Research Fellow Mia Taylor about working at Forethought. The application deadline has been extended to November 1st 2025. Apply here: fore
pnc.st
October 14, 2025 at 7:42 AM
We’re hiring!

Society isn’t prepared for a world with superhuman AI. If you want to help, consider applying to one of our research roles:
forethought.org/careers/res...

Not sure if you’re a good fit? See more in the reply (or just apply — it doesn’t take long)
October 13, 2025 at 8:14 AM
What might happen to society and politics after widespread automation? What are the best ideas, if any, for good post-AGI futures?

David Duvenaud joins the podcast —

pnc.st/s/forecast/...
Politics and Power Post-Automation (with David Duvenaud)
David Duvenaud is an associate professor at the University of Toronto. He recently organised the workshop on ‘Post-AGI Civilizational Equilibria’, and he is a co-author of ‘Gradual Disempowerment’. He recently finished an extended sabbatical on the Alignm
September 25, 2025 at 11:51 AM
How could humans lose control over the future, even if AIs don't coordinate to seek power? What can we do about that?

Raymond Douglas joins the podcast to discuss “Gradual Disempowerment”

Listen: pnc.st/s/forecast/...
Is Gradual Disempowerment Inevitable? (with Raymond Douglas)
Raymond Douglas is a researcher focused on the societal effects of AI. In this episode, we discuss Gradual Disempowerment. To see all our published research, visit forethought.org/research. To subscribe to our newsletter, visit forethought.org/subscribe.
September 9, 2025 at 11:25 AM
Should AI agents obey human laws?

Cullen O’Keefe (Institute for Law & AI) joins the podcast to discuss “law-following AI”.

Listen: pnc.st/s/forecast/...
Should AI Agents Obey Human Laws? (with Cullen O'Keefe)
Cullen O'Keefe is Director of Research at the Institute for Law & AI. In this episode, we discuss 'Law-Following AI: designing AI agents to obey human laws'. To see all our published research, visit forethought.org/research. To subscribe to our ne
August 28, 2025 at 10:30 AM
The ‘Better Futures’ series compares the value of working on ‘survival’ and ‘flourishing’.

In ‘The Basic Case for Better Futures’, Will MacAskill and Philip Trammell describe a more formal way to model the future in those terms.
August 28, 2025 at 8:31 AM
We're starting to post narrations of Forethought articles on our podcast feed, for people who’d prefer to listen to them.

First up is ‘AI-Enabled Coups: How a Small Group Could Use AI to Seize Power’.
August 26, 2025 at 10:58 AM
In the fifth essay in the ‘Better Futures’ series, Will MacAskill asks what, concretely, we could do to improve the value of the future (conditional on survival).

Read it here: www.forethought.org/research/ho...
How to Make the Future Better: Concrete Actions for Flourishing
Forethought outlines concrete actions for better futures: prevent post-AGI autocracy, improve AI governance.
www.forethought.org
August 26, 2025 at 9:04 AM
One reason to think the coming century could be pivotal is that humanity might soon race through a large fraction of the still-unexplored portion of the eventual tech tree.

From the podcast on ‘Better Futures’ —
August 24, 2025 at 1:11 PM
The fourth entry in the ‘Better Futures’ series asks whether the effects of our actions today, beyond preventing extinction, inevitably ‘wash out’ over long time horizons. Will MacAskill argues against that view.

Read it here: www.forethought.org/research/pe...
Persistent Path-Dependence: Why Our Actions Matter Long-Term
Forethought argues against the "wash out" objection: AGI-enforced institutions enable persistent impact.
August 22, 2025 at 9:01 AM
What is the difference between “survival” and “flourishing”?

Will MacAskill on the better futures model, from our first video podcast:
August 21, 2025 at 1:20 PM
New podcast episode with Peter Salib and Simon Goldstein on their article ‘AI Rights for Human Safety’.

pnc.st/s/forecast/...
AI Rights for Human Safety (with Peter Salib and Simon Goldstein)
Peter Salib is an assistant professor of law at the University of Houston, and Simon Goldstein is an associate professor of philosophy at the University of Hong Kong. We discuss their paper ‘AI Rights for Human Safety’. To see all our published research,
July 9, 2025 at 7:18 PM
New podcast episode with @tobyord.bsky.social — on inference scaling, time horizons for AI agents, lessons from scientific moratoria, and more.

pnc.st/s/forecast/...
Inference Scaling, AI Agents, and Moratoria (with Toby Ord)
Toby Ord is a Senior Researcher at Oxford University. We discuss the ‘scaling paradox’, inference scaling and its implications, ways to interpret trends in the length of tasks AI agents can complete, and some unpublished thoughts on lessons from scientifi
June 16, 2025 at 10:36 AM
New report: “Will AI R&D Automation Cause a Software Intelligence Explosion?”

As AI R&D is automated, AI progress may dramatically accelerate. Skeptics counter that hardware stock can only grow so fast. But what if software advances alone can sustain acceleration?

x.com/daniel_2718...
March 26, 2025 at 6:27 PM
Two years ago, AI systems were close to random guessing at PhD-level science questions. Now they beat human experts. As they continue to become smarter and more agentic, they may begin to significantly accelerate technological development. What happens next?
March 11, 2025 at 3:35 PM