Yoshua Bengio
@yoshuabengio.bsky.social
Working towards the safe development of AI for the benefit of all at Université de Montréal, LawZero and Mila.

A.M. Turing Award Recipient and most-cited AI researcher.

https://lawzero.org/en
https://yoshuabengio.org/profile/
Geopolitical competition leaves AI bridge powers in a difficult situation where they will likely soon face insurmountable barriers to independent frontier AI development. To stay relevant and thrive economically, they need to work together and strategically choose their AI development approaches.
November 24, 2025 at 4:26 PM
I was very honoured to receive the Queen Elizabeth Prize for Engineering from His Majesty King Charles III this week, and pleased to hear his thoughts on AI safety as well as his hopes that we can minimize the risks while collectively reaping the benefits.
November 7, 2025 at 9:33 PM
Thank you to @financialtimes.com for the invitation to speak at today's FT Summit and to Cristina Criddle for the excellent discussion.
We touched on AI's early signs of self-preservation and deceptive behaviours, as well as the technical and policy solutions on the horizon.
November 6, 2025 at 9:02 PM
Thank you to the University of Copenhagen, the European Commission @ec.europa.eu, and my co-panelists — Lene Oddershede, Max Welling, Peter Sarlin & @fabiantheis.bsky.social — for a day of excellent discussions.
November 3, 2025 at 5:22 PM
Europe has a chance to shape a safer and more values-aligned future for AI innovation, and needs to. This was my main message at the AI in Science Summit in Copenhagen today.
I also presented Scientist AI, LawZero's approach to creating technical guardrails and helping accelerate scientific discovery.
November 3, 2025 at 5:22 PM
It was a pleasure speaking at the Munich AI Lecture Series last week to present the risks of uncontrolled AI agency and the opportunities we have to create technical and policy solutions.
October 30, 2025 at 4:12 PM
Thanks @marietjeschaake.bsky.social for a great discussion today at the Paris Peace Forum.
We got to cover both technical and policy topics, from the research being done at @law-zero.bsky.social to build technical solutions for safe-by-design AI systems to the importance of European AI sovereignty.
October 29, 2025 at 1:59 PM
Frontier AI could reach or surpass human level within just a few years. This could help solve global issues, but also carries major risks. To move forward safely, we must develop robust technical guardrails and make sure the public has a much stronger say. superintelligence-statement.org
October 22, 2025 at 4:24 PM
AI is evolving too quickly for an annual report to suffice. To help policymakers keep pace, we're introducing the first Key Update to the International AI Safety Report. 🧵⬇️

(1/10)
October 15, 2025 at 10:49 AM
It was a pleasure to discuss the social, political and economic impacts of AI with Daron Acemoglu onstage yesterday in Montréal.
Congratulations to Daron Acemoglu on his honorary doctorate (doctorat honoris causa) from UQAM.

www.ledevoir.com/economie/923...
October 7, 2025 at 6:05 PM
I got the chance to join Daron Acemoglu onstage yesterday for a great discussion on AI’s social, political and economic impacts.
Congratulations @dacemoglumit.bsky.social on receiving an honorary doctorate from UQAM — a well-deserved honour and a pleasure to see you in Montréal!
October 7, 2025 at 6:04 PM
It was an honour to speak at the United Nations this week to address the UN Security Council on the impacts of AI on international peace & security, and to join the high-level multi-stakeholder informal meeting to launch the Global Dialogue on AI Governance.
September 26, 2025 at 9:12 PM
Very pleased to be speaking with Harry Booth next week at #ALLIN2025, Canada’s premier AI event.

We’ll be discussing some of the biggest topics in AI right now during the keynote session titled “AI at a Defining Moment: Ensuring Safety Through Technical and Societal Safeguards.”

allinevent.ai
September 22, 2025 at 2:12 PM
I'm currently in Rome as part of the AI & Fraternity Working Group with a small group of AI experts invited by the Fratelli Tutti Foundation.
Very grateful for the opportunity to reflect on the role & risks of AI for humanity, and for the chance to speak with Fr. Paolo Benanti in person.
September 11, 2025 at 4:12 PM
It was a pleasure to meet Prime Minister @mark-carney.bsky.social and Minister Evan Solomon today as part of an official visit at @mila-quebec.bsky.social with many collaborators from our AI ecosystem. We discussed AI risk mitigation, sovereignty & economic potential among other important topics.
August 20, 2025 at 10:23 PM
The Code of Practice is out. I co-wrote the Safety & Security Chapter, which is an implementation tool to help frontier AI companies comply with the EU AI Act in a lean but effective way. I am proud of the result!
1/3
July 10, 2025 at 11:53 AM
I am deeply honoured by this appointment as an Officer of the Ordre national du Québec; the nation where I grew up, studied and taught, and which, some ten years ago, was one of the first to support the responsible & ethical development of AI www.ordre-national.gouv.qc.ca/actualites/n...
June 12, 2025 at 3:32 PM
On February 28, I had the honour of receiving the King Charles III Coronation Medal in recognition of my contributions to deep learning. Thank you to Senator Andrew Cardozo for presenting me with this important distinction!
March 6, 2025 at 4:48 PM
On Feb. 28, I was honoured to receive the King Charles III Coronation Medal in recognition of my contributions to deep learning. Thank you to Senator Andrew Cardozo for presenting me with this important distinction!
March 6, 2025 at 4:47 PM
I would also like to thank the 30 countries, the OECD, the EU, and the UN that supported this process.

I look forward to the important discussions that will unfold during the AI Action Summit in Paris on February 10-11 and in various other forums in the coming months.

21/21
January 29, 2025 at 1:50 PM
Open-weights models are models whose central components are shared publicly for download.

They boost transparency and facilitate research, but they can also facilitate malicious or misguided use that is difficult or impossible for developers to monitor or mitigate.

14/21
January 29, 2025 at 1:50 PM
Whether scaling will solve AI's remaining limitations is debated. Continued scaling will eventually face physical constraints, but training runs using 10,000x more computation than GPT-4 appear technically feasible by 2030.

9/21
January 29, 2025 at 1:50 PM
Increasingly, companies are investing in the development of AI agents – AI systems that can plan and act with little to no human oversight.

AI companies are increasingly using agents and other AI systems to accelerate AI development itself.

7/21
January 29, 2025 at 1:50 PM
Since the end of the writing period for the Report, data on two new models – o3 & R1 – has suggested that advances in capabilities continue to be rapid.

There has now been a breakthrough on a key abstract reasoning test (ARC-AGI), which until recently was believed to be out of reach, as well as on other tests.

6/21
January 29, 2025 at 1:50 PM
The capabilities of general-purpose AI have increased rapidly in recent years and months.

For example, we now see models whose performance matches human experts in answering PhD-level questions on biology, chemistry, and physics.

5/21
January 29, 2025 at 1:50 PM