Toby Ord
@tobyord.bsky.social
Senior Researcher at Oxford University.
Author — The Precipice: Existential Risk and the Future of Humanity.
tobyord.com
Reposted by Toby Ord
Being born is a roll of the dice.

Most of us got insanely lucky.

Imagine you had to roll again: how would you want the world to look?

www.givingwhatwecan.org/birth-lottery
Birth Lottery
If you were reborn today, where would you land? And how would that change your life?
www.givingwhatwecan.org
December 23, 2025 at 5:27 PM
Dim Red Dot
Scientists have just released a photo featuring a dim red dot. It is the light of a single star exploding in a galaxy so far, far away that nothing we do could ever affect it — even in the very fullness of time.
It lies beyond the Affectable Universe.
Let me explain…
1/🧵
December 18, 2025 at 11:17 AM
Reposted by Toby Ord
New report on trends in AISI's evaluations of frontier AI models over the past two years. A lot of AI discourse focuses on viral moments, but it is important to zoom out to the less flashy trend: AI models are steadily growing in capabilities, including for dual-use.

www.aisi.gov.uk/frontier-ai-...
December 18, 2025 at 10:06 AM
Reposted by Toby Ord
It has become received wisdom in Brussels and Washington that there is a new “euro-sclerosis”: that the EU economy is lagging the US

This view is wrong

A little primer on the measurement of productivity – and why reports of the economic death of Europe are greatly exaggerated 🧵
December 12, 2025 at 12:32 PM
Reposted by Toby Ord
Today is Giving Tuesday, and you can 100x the impact of your donations by finding the most effective charities.

This year, needs across global health, animal welfare, and catastrophic risk are rising, even as some major funders step back.
December 2, 2025 at 2:42 PM
Reposted by Toby Ord
New Google DeepMind paper: "Consistency Training Helps Stop Sycophancy and Jailbreaks" by @alexirpan.bsky.social, me, Mark Kurzeja, David Elson, and Rohin Shah. (thread)
November 4, 2025 at 12:18 AM
Reposted by Toby Ord
Frontier AI could reach or surpass human level within just a few years. This could help solve global issues, but also carries major risks. To move forward safely, we must develop robust technical guardrails and make sure the public has a much stronger say. superintelligence-statement.org
October 22, 2025 at 4:24 PM
Reposted by Toby Ord
In an op-ed published today in TIME, Charlotte Stix and I discuss the serious risks associated with internal deployment by frontier AI companies.
We argue that maintaining transparency and effective public oversight are essential to safely manage the trajectory of AI.
time.com/7327327/ai-w...
When it Comes to AI, What We Don't Know Can Hurt Us
Yoshua Bengio and Charlotte Stix explain how companies' internal, often private, AI development is a threat to society.
time.com
October 22, 2025 at 8:06 PM
The other evening I attended the launch of David Edmonds' book on Peter Singer's shallow pond. I was quite struck when he called it 'the most influential thought experiment in the history of moral philosophy', yet one that had no influence for its first 30 years…
🧵
press.princeton.edu/books/hardco...
Death in a Shallow Pond
From the bestselling coauthor of Wittgenstein’s Poker, a fascinating account of Peter Singer’s controversial “drowning child” thought experiment—and how it changed the way people think about charitabl...
press.princeton.edu
October 13, 2025 at 4:55 PM
Reposted by Toby Ord
We’re hiring!

Society isn’t prepared for a world with superhuman AI. If you want to help, consider applying to one of our research roles:
forethought.org/careers/res...

Not sure if you’re a good fit? See more in the reply (or just apply — it doesn’t take long)
October 13, 2025 at 8:14 AM
Evidence Recent AI Gains are Mostly from Inference-Scaling
🧵
Here's a thread about my latest post on AI scaling …
1/14
www.tobyord.com/writing/most...
Evidence that Recent AI Gains are Mostly from Inference-Scaling — Toby Ord
In the last year or two, the most important trend in modern AI came to an end. The scaling-up of computational resources used to train ever-larger AI models through next-token prediction ( pre-trainin...
www.tobyord.com
October 3, 2025 at 7:31 PM
Reposted by Toby Ord
"It has gone largely unnoticed that time spent on social media peaked in 2022 and has since gone into steady decline."

By @jburnmurdoch.ft.com

www.ft.com/content/a072...
October 3, 2025 at 12:04 PM
Reposted by Toby Ord
✍️ New article: “Foreign aid from the United States saved millions of lives each year”

For decades, these aid programs received bipartisan support and made a difference. Cutting them will cost lives.
September 30, 2025 at 9:20 AM
An insightful piece by Deena Mousa about how AI performs extremely well at benchmarks for reading medical scans, yet isn't putting radiologists out of work. Lots to learn for other knowledge-work professions here.
www.worksinprogress.news/p/why-ai-isn...
AI isn't replacing radiologists
Radiology combines digital images, clear benchmarks, and repeatable tasks. But demand for human radiologists is at an all-time high.
www.worksinprogress.news
September 29, 2025 at 9:01 AM
Evaluating the Infinite
🧵
My latest paper tries to solve a longstanding problem afflicting fields such as decision theory, economics, and ethics — the problem of infinities.
Let me explain a bit about what causes the problem and how my solution avoids it.
1/N
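To give a flavour of the kind of assignment the abstract describes (my own gloss, not from the paper: I assume the technique evaluates the partial-sum function at an infinite hyperreal index ω):

```latex
% Divergent sums receive fine-grained infinite values by evaluating
% their partial sums S(n) at a hyperfinite index \omega (my gloss):
\sum_{k=1}^{\infty} 1 \;\mapsto\; \omega
  \qquad \text{since } S(n) = n,
\qquad
\sum_{k=1}^{\infty} k \;\mapsto\; \frac{\omega(\omega+1)}{2}
  \qquad \text{since } S(n) = \tfrac{n(n+1)}{2}.
```

On this reading, two sums that both diverge to "+∞" in the standard reals get distinct hyperreal values (ω versus roughly ω²/2), so they can still be compared and traded off — which is exactly what decision theory and ethics need.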
arxiv.org/abs/2509.19389
Evaluating the Infinite
I present a novel mathematical technique for dealing with the infinities arising from divergent sums and integrals. It assigns them fine-grained infinite values from the set of hyperreal numbers in a ...
arxiv.org
September 25, 2025 at 3:28 PM
Reposted by Toby Ord
Establishing where we collectively draw red lines is essential to prevent unacceptable AI risks.

See the statement signed by myself and over 200 prominent figures:
red-lines.ai
200+ prominent figures endorse Global Call for AI Red Lines
AI could soon far surpass human capabilities and escalate risks such as engineered pandemics, widespread disinformation, large-scale manipulation of individuals including children...
red-lines.ai
September 22, 2025 at 5:37 PM
The Extreme Inefficiency of RL for Frontier Models
🧵
The switch from training frontier models by next-token-prediction to reinforcement learning (RL) requires 1,000s to 1,000,000s of times as much compute per bit of information the model gets to learn from…
1/11
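The thousands-to-millions factor can be sketched with a back-of-the-envelope calculation (a minimal sketch with illustrative numbers of my own choosing, not figures from the post: I assume each pretraining token supplies up to log2(vocab) bits of supervision, while an outcome-reward RL episode supplies roughly one bit):

```python
import math

def info_ratio(vocab_size: int, episode_tokens: int, rollouts: int) -> float:
    """Rough ratio of supervision per unit compute:
    next-token pretraining vs outcome-reward RL.

    Pretraining: each token yields up to log2(vocab_size) bits.
    RL: ~1 bit (pass/fail reward) per episode, after spending
    compute on episode_tokens * rollouts generated tokens.
    """
    bits_per_token = math.log2(vocab_size)
    return bits_per_token * episode_tokens * rollouts

# Illustrative settings (assumptions): 100k vocab, and either a short
# single-rollout episode or a long episode with 16 rollouts per update.
print(info_ratio(100_000, 1_000, 1))    # on the order of tens of thousands
print(info_ratio(100_000, 4_000, 16))   # on the order of a million
```

Under these assumptions the inefficiency factor lands in the thousands-to-millions range the post describes, driven mainly by episode length and the number of rollouts per learning signal.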
www.tobyord.com/writing/inef...
The Extreme Inefficiency of RL for Frontier Models — Toby Ord
The new scaling paradigm for AI reduces the amount of information a model could learn per hour of training by a factor of 1,000 to 1,000,000. I explore what this means and its implications for scaling...
www.tobyord.com
September 19, 2025 at 5:18 PM
I'm overjoyed to see that more than 10,000 people have joined me in pledging 10% of their lifetime income to help others as effectively as they can. We're each able to do so much — and so much more together.
🧵 @givingwhatwecan.bsky.social
August 15, 2025 at 10:59 AM
Reposted by Toby Ord
Let’s take a look into GPT-5’s record-setting performance on FrontierMath. How did it perform on the holdout vs. non-holdout set, how did it do across tiers, and what new Tier 4 problems did it solve? 🧵
August 14, 2025 at 11:03 PM
Reposted by Toby Ord
Should we expect widespread moral progress in the future?

In a new paper, Convergence and Compromise, @FinMoorhouse and I discuss this.

Thread.
August 8, 2025 at 6:33 PM
Reposted by Toby Ord
The Code of Practice is out. I co-wrote the Safety & Security Chapter, which is an implementation tool to help frontier AI companies comply with the EU AI Act in a lean but effective way. I am proud of the result!
1/3
July 10, 2025 at 11:53 AM
Reposted by Toby Ord
Happy to announce that our paper "Systemic contributions to global catastrophic risk" is now out in Global Sustainability. Short version: systemic risk and global catastrophic risk obviously go together, and we need to link the fields more closely. www.cambridge.org/core/journal...
June 26, 2025 at 9:25 PM
Reposted by Toby Ord
There are some interesting details about how Anthropic trained their models tucked away in today's summary judgement: they bought, chopped up and scanned millions of dollars' worth of books! simonwillison.net/2025/Jun/24/...
Anthropic wins a major fair use victory for AI — but it’s still in trouble for stealing books
Major USA legal news for the AI industry today. Judge William Alsup released a "summary judgement" (a legal decision that results in some parts of a case skipping a trial) …
simonwillison.net
June 24, 2025 at 10:09 PM
Reposted by Toby Ord
New podcast episode with @tobyord.bsky.social — on inference scaling, time horizons for AI agents, lessons from scientific moratoria, and more.

pnc.st/s/forecast/...
Inference Scaling, AI Agents, and Moratoria (with Toby Ord)
Toby Ord is a Senior Researcher at Oxford University. We discuss the ‘scaling paradox’, inference scaling and its implications, ways to interpret trends in the length of tasks AI agents can complete, and some unpublished thoughts on lessons from scientifi
pnc.st
June 16, 2025 at 10:36 AM
Reposted by Toby Ord
Fin Moorhouse interviews @tobyord.bsky.social about the future of AI and the risks it involves. Clear and illuminating.

pnc.st/s/forecast/5...
Inference Scaling, AI Agents, and Moratoria (with Toby Ord)
Toby Ord is a Senior Researcher at Oxford University. We discuss the ‘scaling paradox’, inference scaling and its implications, ways to interpret trends in the length of tasks AI agents can complete,...
pnc.st
June 17, 2025 at 8:49 AM