Future of Life Institute
@futureoflife.org
We work on reducing extreme risks and steering transformative technologies to benefit humanity.

Learn more: futureoflife.org
👉 As reviewer Stuart Russell put it, “Some companies are making token efforts, but none are doing enough… This is not a problem for the distant future; it’s a problem for today.”

🔗 Read the full report now: futureoflife.org/ai-safety-in...
2025 AI Safety Index - Future of Life Institute
The Summer 2025 edition of our AI Safety Index, in which AI experts rate leading AI companies on key safety and security domains.
futureoflife.org
July 18, 2025 at 8:05 PM
6️⃣ OpenAI secured second place, ahead of Google DeepMind.

7️⃣ Chinese AI firms Zhipu AI and DeepSeek received failing overall grades.

🧵
July 18, 2025 at 8:05 PM
3️⃣ Only 3 of the 7 firms (Anthropic, OpenAI, and Google DeepMind) report substantive testing for dangerous capabilities linked to large-scale risks such as bio- or cyber-terrorism.

4️⃣ Whistleblowing policy transparency remains a weak spot.

5️⃣ Anthropic received the best overall grade (C+).

🧵
July 18, 2025 at 8:05 PM
Key takeaways:
1️⃣ The AI industry is fundamentally unprepared for its own stated goals.

2️⃣ Capabilities are accelerating faster than risk-management practice, and the gap between firms is widening.

🧵
July 18, 2025 at 8:05 PM
🔗 Read more about these AI safety research priorities: aisafetypriorities.org
The Singapore Consensus on Global AI Safety Research Priorities
Building a Trustworthy, Reliable and Secure AI Ecosystem. Read the full report online, or download the PDF.
aisafetypriorities.org
May 8, 2025 at 7:29 PM
➡️ The Singapore Consensus builds on the International AI Safety Report, backed by 33 countries, and aims to enable more impactful R&D that quickly creates safety and evaluation mechanisms, fostering a trustworthy, reliable, and secure ecosystem in which AI is used for the public good.
May 8, 2025 at 7:29 PM
➕ Be sure to check out @asterainstitute.bsky.social's Residency program, now accepting applications for the Oct. 2025 cohort! The program supports "creative, high-agency scientists, engineers and entrepreneurs" in future-focused, high-impact, open-first innovation.

Learn more: astera.org/residency
April 4, 2025 at 8:36 PM
🔗 Listen to the episode now on your favourite podcast player, or here: www.youtube.com/watch?v=kJ0K...
Brain-like AGI and why it's Dangerous (with Steven Byrnes)
YouTube video by Future of Life Institute
www.youtube.com
April 4, 2025 at 8:36 PM
💪 Foster transparent development through an AI industry whistleblower program and mandatory security incident reporting.
March 18, 2025 at 5:57 PM
🧰 Protect American workers and critical infrastructure from AI-related threats by tracking labor displacement and placing export controls on advanced AI models.
March 18, 2025 at 5:57 PM
🚫 Ensure AI systems are free from ideological agendas and ban models with superhuman persuasive abilities.
March 18, 2025 at 5:57 PM
🚨 Protect the presidency from loss of control by mandating “off-switches”, imposing a targeted moratorium on developing uncontrollable AI systems, and enforcing strong antitrust measures.
March 18, 2025 at 5:57 PM