@benjamintodd.bsky.social
AGI might be here soon, but what can you actually do about it? I'm writing a new guide to that. Stay tuned for updates, but here's a summary:
April 29, 2025 at 5:19 PM
7/ So if you can find a role that helps with these risks over the next 5-10 years, that seems like the highest expected-impact thing you can do.

That said, I don't think it's for everyone:
April 29, 2025 at 4:15 PM
6/ The chance of building powerful AI is unusually high between now and around 2030, making the next 5 years especially critical.

If AGI emerges in the next 5 years, you’ll be part of one of the most important transitions in human history. If not, you’ll have time to return to your previous path.
April 29, 2025 at 4:15 PM
It's often possible to transition with just ~100h of reading and speaking to people in the field. You don't need to be technical – there are many other ways to help.
April 29, 2025 at 4:15 PM
5/ A few years ago it was much harder to help, but today there are more and more concrete jobs working on these issues.
April 29, 2025 at 4:15 PM
3/ These accelerations bring a range of major risks: not just misalignment, but also concentration of power, new weapons of mass destruction, great power conflict, the treatment of digital beings, and more.
April 29, 2025 at 4:15 PM
2/ Lots of people hype AI as 'transformative', but few internalise how crazy it could really be. There are three different types of possible acceleration, which are much more grounded in empirical research than they were a couple of years ago.
April 29, 2025 at 4:15 PM
Why to quit your job and work on risks from AI (the short version) 🧵
April 29, 2025 at 4:14 PM
5/ Samotsvety are a team of some of the most elite forecasters out there, and they also know more about AGI than most.

In 2023, they gave shorter timelines: a 25% chance by 2029.

That was also down from their 2022 forecast.

But unfortunately this still used the terrible Metaculus definition.
April 9, 2025 at 10:03 PM
3/ So what about forecasting experts?

The Metaculus AGI question has 1,000+ forecasts.

The median estimate has fallen from ~50 years away to ~5.

Unfortunately, the definition is both too stringent for AGI, and not stringent enough. So I'm skeptical of the specific numbers.
April 9, 2025 at 10:03 PM
In 2022 (blue), they forecast AI wouldn't be able to write simple Python code until 2027.

And even in 2023 (red), they predicted 2025!

They gave much longer answers for "full automation of labour" for unclear reasons.

Also AI expertise ≠ forecasting expertise.
April 9, 2025 at 10:03 PM
1/ First up, AI company leaders.

They tend to be most bullish – predicting AGI in 2-5 years.

It's obvious why they might be biased.

But I don't think they should be totally ignored – they have the most visibility into next-gen capabilities.

(And have been more right about recent progress.)
April 9, 2025 at 10:03 PM
What can experts tell us about when AGI will arrive?

Maybe not much – except that they now expect it sooner than they used to.

I did a review of the 5 most relevant expert groups and what we can learn from them…
April 9, 2025 at 10:03 PM
Why has there been so little AI automation, despite great benchmark scores? This chart is a big part of the answer.

Models today can do tasks that take up to ~1 hour.

But real jobs mainly consist of tasks that take days or weeks.

So AI can answer questions, but it can't yet do real jobs.

But that's about to change…
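As a rough illustration of why this could change soon, here's a minimal extrapolation sketch. The starting horizon (~1 hour) and the ~7-month doubling time are assumptions in the spirit of the trend the chart shows, not figures from the post:

```python
# Back-of-envelope extrapolation; all figures are assumptions for illustration
# (~1 hour task horizon today, doubling roughly every 7 months).
import math

current_horizon_hours = 1.0    # roughly what models manage today (assumed)
doubling_time_months = 7.0     # assumed doubling time for the task horizon

def months_until(target_hours: float) -> float:
    """Months until the horizon reaches target_hours, if the doubling continues."""
    doublings_needed = math.log2(target_hours / current_horizon_hours)
    return doublings_needed * doubling_time_months

for label, hours in [("a work day (8h)", 8), ("a work week (40h)", 40), ("a month (160h)", 160)]:
    print(f"{label}: ~{months_until(hours):.0f} months away")

# Under these assumptions: a day is ~21 months out, a week ~37 months,
# a month ~51 months – i.e. multi-week tasks around 2028 if the trend holds.
```

If that doubling trend holds, week-long tasks come into range within roughly three years; if it breaks, the argument weakens accordingly.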
April 8, 2025 at 3:50 PM
Other meaningful arguments against:
April 6, 2025 at 3:13 PM
The strongest counterargument?

Current AI methods might plateau on ill-defined, contextual, long-horizon tasks—which happens to be most knowledge work.

Without continuous breakthroughs, profit margins fall and investment dries up.

You can boil it down to whether this trend will continue:
April 6, 2025 at 3:13 PM
5. While real-world deployment faces many hurdles, AI is already very useful in virtual and verifiable domains:

• Software engineering & startups
• Scientific research
• AI development itself

These alone could drive massive economic impact and accelerate AI progress.
April 6, 2025 at 3:13 PM
4. By 2030, AI training compute will far surpass estimates for the human brain.

If algorithms approach even a fraction of human learning efficiency, we'd expect human-level capabilities in at least some domains.

www.cold-takes.com/forecasting...
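For intuition, here's a minimal back-of-envelope sketch of the comparison the post is gesturing at. Every specific figure below is my own rough assumption in the spirit of the biological-anchors estimates linked above, not a number from the thread:

```python
# Order-of-magnitude comparison; all figures are rough assumptions for
# illustration (brain ~1e15 FLOP/s, ~30 years of "training", ~3e26 FLOP
# frontier run in 2025, ~4x/year growth in training compute).

brain_flop_per_s = 1e15
human_lifetime_s = 30 * 365 * 24 * 3600                       # ~9.5e8 seconds
lifetime_anchor_flop = brain_flop_per_s * human_lifetime_s    # ~1e24 FLOP

frontier_training_2025_flop = 3e26      # assumed scale of a 2025 frontier run
growth_per_year = 4                     # assumed yearly growth in training compute
projected_2030_flop = frontier_training_2025_flop * growth_per_year ** 5

print(f"Human lifetime anchor:   {lifetime_anchor_flop:.1e} FLOP")
print(f"Projected 2030 training: {projected_2030_flop:.1e} FLOP")
print(f"Ratio: ~{projected_2030_flop / lifetime_anchor_flop:.0e}x")
```

Even if each assumption is off by an order of magnitude or two, the projected training compute still lands well above the lifetime-learning anchor, which is the point being made here.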
April 6, 2025 at 3:13 PM
3. Expert forecasts have consistently moved earlier.

AI and forecasting experts now place significant probability on AGI-level capabilities pre-2030.

I remember when 2045 was considered optimistic.

80000hours.org/2025/03/whe...
April 6, 2025 at 3:13 PM
2. Benchmark extrapolation suggests that in 2028 we'll see systems with superhuman coding and reasoning that can autonomously complete multi-week tasks.

All the major benchmarks ↓
April 6, 2025 at 3:13 PM
1. The four recent drivers of progress don't run into bottlenecks until at least 2028.

And with investment in compute and algorithms continuing to increase, new drivers are likely to be discovered.
April 6, 2025 at 3:13 PM
AI CEOs claim AGI will be here in 2-5 years.

Is this just hype, or could they be right?

I spent weeks writing this new in-depth primer on the best arguments for and against.

Starting with the case for...🧵
April 6, 2025 at 3:13 PM