Yoshua Bengio
@yoshuabengio.bsky.social
Working towards the safe development of AI for the benefit of all at Université de Montréal, LawZero and Mila.

A.M. Turing Award Recipient and most-cited AI researcher.

https://lawzero.org/en
https://yoshuabengio.org/profile/
Pinned
Today marks a big milestone for me. I'm launching @law-zero.bsky.social, a nonprofit focusing on a new safe-by-design approach to AI that could both accelerate scientific discovery and provide a safeguard against the dangers of agentic AI.
Every frontier AI system should be grounded in a core commitment: to protect human joy and endeavour. Today, we launch LawZero, a nonprofit dedicated to advancing safe-by-design AI. lawzero.org
We're glad to have you at LawZero, Iulian. Welcome to the team!
We are thrilled to welcome Iulian Serban to LawZero as Senior Director, Research and Development.

As the founder of Korbit, he brings deep expertise in GenAI, software security, and research to our mission.

Full press release: lawzero.org/en/news/lawz...
November 26, 2025 at 3:54 PM
I’m pleased to share the Second Key Update to the International AI Safety Report, which outlines how AI developers, researchers, and policymakers are approaching technical risk management for general-purpose AI systems.
(1/6)
November 25, 2025 at 12:06 PM
Geopolitical competition leaves AI bridge powers in a difficult position: they will likely soon face insurmountable barriers to independent frontier AI development. To stay relevant and thrive economically, they need to work together and choose their AI development approaches strategically.
November 24, 2025 at 4:26 PM
Open-weight models are becoming increasingly capable while creating risks beyond those that already exist for closed-weight models.
To continue benefiting from the advantages of open-weight models, we must develop risk mitigation methodologies specifically for them, as discussed in this paper.
🚨New paper🚨

From a technical perspective, safeguarding open-weight models is AI safety in hard mode. But there's still a lot of progress to be made. Our new paper covers 16 open problems.

🧵🧵🧵
November 12, 2025 at 6:21 PM
I was very honoured to receive the Queen Elizabeth Prize for Engineering from His Majesty King Charles III this week, and pleased to hear his thoughts on AI safety as well as his hopes that we can minimize the risks while collectively reaping the benefits.
November 7, 2025 at 9:33 PM
Thank you to @financialtimes.com for the invitation to speak at today's FT Summit and to Cristina Criddle for the excellent discussion.
We touched on AI's early signs of self-preservation and deceptive behaviours, as well as the technical and policy solutions on the horizon.
November 6, 2025 at 9:02 PM
We need innovative technical and societal solutions to mitigate AI risks. I believe liability insurance for AI developers could be an excellent market-based incentive to drive safety standards and accountability, and is an option worth considering.
www.ft.com/content/181f...
Force AI firms to buy nuclear-style insurance, says Yoshua Bengio
Turing Prize winner urges governments to require tech groups to cover catastrophic outcomes and fund safety research
www.ft.com
November 6, 2025 at 7:25 PM
Europe has a chance to shape a safer, more values-aligned future for AI innovation, and it needs to take it. This was my main message at the AI in Science Summit in Copenhagen today.
I also presented Scientist AI, LawZero's approach to create technical guardrails and help accelerate scientific discovery.
November 3, 2025 at 5:22 PM
Had a great discussion at the Paris Peace Forum yesterday with @jacindaardern.bsky.social, Vilas Dhar, Robin Geiss, Nicholas Butts and @heroceane.bsky.social.

www.youtube.com/watch?app=de...
From Safety to Security: Governing Adversarial Use of AI
YouTube video by Paris Peace Forum
www.youtube.com
October 31, 2025 at 1:48 PM
It was a pleasure speaking at the Munich AI Lecture Series last week to present the risks of uncontrolled AI agency and the opportunities we have to create technical and policy solutions.
October 30, 2025 at 4:12 PM
Full talk available here: youtu.be/UgZVc0-00t0?...
October 30, 2025 at 9:31 AM
In this interview with @lexpress.fr, I was able to underline my optimism about the possibility of finding technical solutions to AI's risks, but also the urgency for Europe to maintain its competitiveness and independence in the sector.
www.lexpress.fr/economie/hig...
Yoshua Bengio, Turing Award winner: "The AI bubble bursting? I almost hope for it..."
The godfather of modern AI warns about advances in reasoning models, which can go as far as cheating to ensure their own survival. He also calls on Europe to wake up if it does not want to...
www.lexpress.fr
October 30, 2025 at 7:37 AM
Thanks @marietjeschaake.bsky.social for a great discussion today at the Paris Peace Forum.
We covered both technical and policy topics, from the research at @law-zero.bsky.social on building technical solutions for safe-by-design AI systems to the importance of European AI sovereignty.
October 29, 2025 at 1:59 PM
Reposted by Yoshua Bengio
LawZero is growing fast, and we're always looking for dedicated people to join our team.
If you're interested in working on technical safeguards to create safe-by-design AI systems, check out the openings on our website and don't hesitate to reach out to our team!
job-boards.greenhouse.io/lawzero
LawZero
About LawZero: LawZero is a non-profit organization committed to advancing research and creating technical solutions that enable safe-by-design AI systems. Its ...
job-boards.greenhouse.io
October 24, 2025 at 2:58 PM
In an op-ed published today in TIME, Charlotte Stix and I discuss the serious risks associated with internal deployment by frontier AI companies.
We argue that maintaining transparency and effective public oversight are essential to safely manage the trajectory of AI.
time.com/7327327/ai-w...
When it Comes to AI, What We Don't Know Can Hurt Us
Yoshua Bengio and Charlotte Stix explain how companies' internal, often private, AI development is a threat to society.
time.com
October 22, 2025 at 8:06 PM
Frontier AI could reach or surpass human level within just a few years. This could help solve global issues, but also carries major risks. To move forward safely, we must develop robust technical guardrails and make sure the public has a much stronger say. superintelligence-statement.org
October 22, 2025 at 4:24 PM
AI is evolving too quickly for an annual report to suffice. To help policymakers keep pace, we're introducing the first Key Update to the International AI Safety Report. 🧵⬇️

(1/10)
October 15, 2025 at 10:49 AM
This op-ed by Stephen Witt, published in the @nytimes.com, thoughtfully captures the urgency and complexity of navigating AI's risks, but also my sincere conviction that technical solutions are possible: we still have agency and an opportunity to act.

www.nytimes.com/2025/10/10/o...
Opinion | The A.I. Prompt That Could End the World
www.nytimes.com
October 10, 2025 at 3:55 PM
It was a pleasure to discuss AI's social, political and economic impacts with Daron Acemoglu on stage yesterday in Montréal.
Warm congratulations to Daron Acemoglu on his honorary doctorate from UQAM.

www.ledevoir.com/economie/923...
October 7, 2025 at 6:05 PM
I got the chance to join Daron Acemoglu onstage yesterday for a great discussion on AI’s social, political and economic impacts.
Congratulations @dacemoglumit.bsky.social on receiving an honorary doctorate from UQAM — a well-deserved honour and a pleasure to see you in Montréal!
October 7, 2025 at 6:04 PM
I recently spoke to the @wsj.com about my concerns with AI's trajectory and the current competitive dynamics that push AI's capabilities forward without sufficient safety assurances.

Thanks to Isabelle Bousquette for a great conversation!

www.wsj.com/articles/a-g...
A ‘Godfather of AI’ Remains Concerned as Ever About Human Extinction
Yoshua Bengio worries about AI's capacity to deceive users in pursuit of its own goals. "The scenario in '2001: A Space Odyssey' is exactly like this," he says.
www.wsj.com
October 3, 2025 at 3:30 PM
It was an honour to address the UN Security Council this week on the impacts of AI on international peace & security, and to join the high-level multi-stakeholder informal meeting launching the Global Dialogue on AI Governance.
September 26, 2025 at 9:12 PM
I joined the RAD podcast to discuss the future of AI, the risks of the current race in the field, and solutions for safer AI.

Thanks to @olivierarbourmasse.bsky.social and the RAD team for the excellent conversation!

www.youtube.com/watch?v=JxFI...
Artificial intelligence: the risks of losing control, with Yoshua Bengio | The Rad podcast
YouTube video by Rad
www.youtube.com
September 23, 2025 at 3:33 PM
Establishing where we collectively draw red lines is essential to prevent unacceptable AI risks.

See the statement signed by myself and over 200 prominent figures:
red-lines.ai
200+ prominent figures endorse Global Call for AI Red Lines
AI could soon far surpass human capabilities and escalate risks such as engineered pandemics, widespread disinformation, large-scale manipulation of individuals including children...
red-lines.ai
September 22, 2025 at 5:37 PM
Very pleased to be speaking with Harry Booth next week at #ALLIN2025, Canada’s premier AI event.

We’ll be discussing some of the biggest topics in AI right now during the keynote session titled “AI at a Defining Moment: Ensuring Safety Through Technical and Societal Safeguards.”

allinevent.ai
September 22, 2025 at 2:12 PM