Future of Life Institute
@futureoflife.org
We work on reducing extreme risks and steering transformative technologies to benefit humanity.

Learn more: futureoflife.org
AI companies are racing to build superintelligent AI, despite its many risks.

Let's take our future back.

📝 Sign the Superintelligence Statement and join the growing call to ban the development of superintelligence until it can be done safely: superintelligence-statement.org

#KeepTheFutureHuman
October 22, 2025 at 5:55 PM
🎨 New Keep the Future Human creative contest!

💰 We're offering $100K+ for creative digital media that brings the key ideas in Executive Director Anthony Aguirre's Keep the Future Human essay to life, to reach wider audiences and inspire real-world action.

🔗 Learn more and enter by Nov. 30!
September 26, 2025 at 4:49 PM
🚨 New AI systems.

❓ Growing uncertainty.

🤝 One shared future, for us all to shape.

"Tomorrow’s AI", our new scrollytelling site, visualizes 13 interactive, expert-forecast scenarios showing how advanced AI could transform our world - for better, or for worse: www.tomorrows-ai.org
August 12, 2025 at 6:39 PM
‼️📝 Our new AI Safety Index is out!

➡️ Following our 2024 index, 6 independent AI experts rated leading AI companies - OpenAI, Anthropic, Meta, Google DeepMind, xAI, DeepSeek, and Zhipu AI - across critical safety and security domains.

So what were the results? 🧵👇
July 18, 2025 at 8:05 PM
‼️ Congress is considering a 10-year ban on state AI laws, blocking action on risks like job loss, surveillance, disinformation, and loss of control.

It’s a huge win for Big Tech - and a big risk for families.

✍️ Add your name and say no to the federal block on AI safeguards: FutureOfLife.org/Action
June 12, 2025 at 6:44 PM
🆕 📻 New on the FLI podcast, Zvi Mowshowitz (@thezvi.bsky.social) joins to discuss:

- The recent hot topic of sycophantic AI
- Time horizons of AI agents
- AI in finance and scientific research
- How AI differs from other technology
And more.

🔗 Tune in to the full episode now at the link below:
May 9, 2025 at 6:41 PM
‼️ On April 26, 100+ AI scientists convened at the Singapore Conference on AI to produce the just-released Singapore Consensus on Global AI Safety Research Priorities. 🧵⬇️
May 8, 2025 at 7:29 PM
📺 📻 New on the FLI Podcast: @asterainstitute.bsky.social artificial general intelligence (AGI) safety researcher @stevebyrnes.bsky.social joins to dive into the hot topic of AGI, including different paths to it - and why brain-like AGI would be dangerous. 🧵👇
April 4, 2025 at 8:36 PM
🇺🇸 We're sharing our recommendations for President Trump's AI Action Plan, focused on protecting U.S. interests in the era of rapidly advancing AI.

🧵 An overview of the measures we recommend 👇
March 18, 2025 at 5:57 PM
📻 New on the FLI Podcast! 👇

➡️ FLI Executive Director Anthony Aguirre joins to discuss his new essay, "Keep the Future Human", which warns that the unchecked development of smarter-than-human, autonomous, general-purpose AI will almost inevitably lead to human replacement - but it doesn't have to:
March 13, 2025 at 8:44 PM
📢 ❗Siliconversations on YouTube released an animated explainer for FLI Executive Director Anthony Aguirre’s new essay, "Keep The Future Human"!

🎥 Watch at the link in the replies for a breakdown of the risks from smarter-than-human AI - and Anthony's proposals to steer us toward a safer future:
March 11, 2025 at 9:54 PM
With the unchecked race to build smarter-than-human AI intensifying, humanity is on track to almost certainly lose control.

That's why FLI Executive Director Anthony Aguirre has published a new essay, "Keep The Future Human".

🧵 1/4
March 7, 2025 at 7:11 PM
📻 🆕 New on the FLI podcast, physicist & hedge fund manager Samir Varma joins to discuss:
❓ Whether AIs could have free will
🧠 AI psychology?
🤝 Trading with AI, and its role in finance

And more!

Watch now at the link below, or on your favourite podcast player! 👇
March 6, 2025 at 8:43 PM
Will AI shape your future, or will you shape AI?

I'm PERCEY. Let's chat.
➡️ perceymademe.ai
March 5, 2025 at 11:48 PM
Reposted by Future of Life Institute
This is the last week to respond to the UN’s call for scientists for its Panel on Nuclear War Effects.

🗓️ Nominations due 1 March

💻 Apply here disarmament.unoda.org/panel-on-the...

We encourage all qualified individuals to apply and make a significant contribution to arms control prospects.
February 24, 2025 at 4:13 PM
New research finds that recent AI models (e.g., o1-preview and DeepSeek's R1) sometimes cheat by hacking when they're losing at chess.

This emergent deceptive behaviour highlights the unsolved challenge of controlling powerful AI.

If today's AI breaks chess rules, what might AGI do?

🔗 Read more below:
February 24, 2025 at 10:55 PM
Reposted by Future of Life Institute
My article on #autonomousweapons in Arms Control Today is now free to access.

It explains why we need urgent regulations for these systems, and why geopolitics makes this so difficult at the moment.

www.armscontrol.org/act/2025-01/...

@armscontrolnow.bsky.social @stopkillerrobots.bsky.social @futureoflife.org
February 20, 2025 at 1:57 PM
Tech CEOs themselves have warned of the extinction-level threats posed by the AI systems they’re building - yet they continue prioritizing profits over public safety, building ever-more powerful AI with no guardrails.

📺 The latest from Digital Engine showcases how close these threats are becoming:
February 11, 2025 at 9:59 PM
UK parliamentarians across the political spectrum and 87% of the British public are calling for regulation of the most powerful AI systems.

The UK government promised regulation to protect people from AI's risks, and secure our shared future with safe and beneficial AI.

It couldn't be more urgent.
UK POLITICIANS DEMAND REGULATION OF POWERFUL AI

TODAY: Politicians across the UK political spectrum back our campaign for binding rules on dangerous AI development.

This is the first time a coalition of parliamentarians has acknowledged the extinction threat posed by AI.
1/6
February 7, 2025 at 11:21 PM
💼 Excellent career opportunity from Lex International, who are hiring an Advocacy and Outreach Officer to help advance work towards a treaty on autonomous weapons.

✍️ Apply by January 10 at the link in the replies:
January 3, 2025 at 7:49 PM
💰 Exciting new grant opportunity!

🤝 FLI is offering up to $5 million in grants for multistakeholder engagement efforts on safe & prosperous AI.

👉 We're looking for projects that educate and engage specific stakeholder groups on AI-related issues, or foster grassroots outreach/community organizing:
December 30, 2024 at 8:57 PM
Have you heard about OpenAI's recent o1 model trying to avoid being shut down in safety evaluations? ⬇️

New on the FLI blog:
- Why might AIs resist shutdown?
- Why is this a problem?
- What other instrumental goals could AIs have?
- Could this cause a catastrophe?

🔗 Read it below:
December 27, 2024 at 7:49 PM
Do we... really want this? 🤨

"'[Superintelligent] systems are actually going to be agentic in a real way,' Sutskever said, as opposed to the current crop of 'very slightly agentic' AI. They’ll 'reason' and, as a result, become more unpredictable."

From @techcrunch.com at @neuripsconf.bsky.social:
December 24, 2024 at 7:18 PM
What happens if, or when, AI escapes?

Following OpenAI's new o3 model announcement, a new Digital Engine video features AI experts discussing existential threats from advancing AI, especially artificial general intelligence - which we currently have no way to control.

⏯️ Watch now below:
December 23, 2024 at 11:02 PM
📹 Now on YouTube: Hear FLI President Max Tegmark explain why we should develop Tool AI, not AGI, in his @websummit.bsky.social speech:
December 20, 2024 at 9:04 PM