benjamintodd.bsky.social
@benjamintodd.bsky.social
If you want to switch, speak to @80000Hours one-on-one and they can help you with planning & introductions.

80000hours.org/speak-with-us/

And follow the links in the full post for more:
benjamintodd.substack.com/p/work-on-a...
Should you quit your job – and work on risks from AI?
In five years, we could have AI systems capable of accelerating science and automating skilled jobs.
benjamintodd.substack.com
April 29, 2025 at 4:15 PM
7/ So if you can find a role that helps over the next 5-10 years, that seems like the highest expected-impact thing you can do.

Though, I don't think it's for everyone:
April 29, 2025 at 4:15 PM
6/ The chance of building powerful AI is unusually high between now and around 2030, making the next 5 years especially critical.

If AGI emerges in the next 5 years, you’ll be part of one of the most important transitions in human history. If not, you’ll have time to return to your previous path.
April 29, 2025 at 4:15 PM
It's often possible to transition with just ~100h of reading and speaking to people in the field. You don't need to be technical – there are many other ways to help.
April 29, 2025 at 4:15 PM
5/ A few years ago it was much harder to help, but today there are more and more concrete jobs working on these issues.
April 29, 2025 at 4:15 PM
4/ Fewer than 10,000 people work full-time on reducing the most important aspects of these risks – tiny compared to the millions working on established issues like climate change, or to the number of people trying to deploy the technology as quickly as possible.
April 29, 2025 at 4:15 PM
3/ These accelerations bring a range of major risks: not just misalignment, but also concentration of power, new weapons of mass destruction, great-power conflict, the treatment of digital beings, and more.
April 29, 2025 at 4:15 PM
2/ Lots of people hype AI as 'transformative', but few internalise how crazy it could really be. There are three different types of possible acceleration, which are much more grounded in empirical research than they were a couple of years ago.
April 29, 2025 at 4:15 PM
Combining the groups, I think it's fair to say AGI by 2030 is within the bounds of expert opinion.

There's a lot of uncertainty, but high uncertainty means we can neither rule it out nor rule it in.

And every group's estimates have shifted sooner over time.
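As a toy illustration of what combining them could look like numerically – the per-group probabilities below are my own rough readings of the figures in this thread, and the equal weighting is an arbitrary assumption:

```python
# Toy equal-weight aggregation of rough P(AGI by ~2030) readings.
# Every number here is an assumed, loose reading of the groups'
# figures cited in this thread – not anyone's official forecast.

estimates = {
    "AI company leaders": 0.50,    # "AGI in 2-5 years", read loosely
    "AI Impacts authors": 0.20,    # ~25% by 2032 suggests a bit less by 2030
    "XPT superforecasters": 0.05,  # ~25% by 2047 suggests far less by 2030
    "Samotsvety": 0.25,            # 25% by 2029 (their 2023 forecast)
}

aggregate = sum(estimates.values()) / len(estimates)
print(f"Equal-weight aggregate: {aggregate:.0%}")
```

Swap in your own readings and weights – the point is just that a non-trivial probability by 2030 survives most reasonable weightings.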

Full post:
80000hours.org/2025/03/when...
When do experts expect AGI to arrive?
As a non-expert, it would be great if there were experts who could tell us when we should expect artificial general intelligence (AGI) to arrive. Unfortunately, there aren't. There are only different ...
80000hours.org
April 9, 2025 at 10:03 PM
5/ Samotsvety are a team of some of the most elite forecasters out there, who also know more about AI than typical superforecasters.

In 2023, they gave shorter estimates: 25% by 2029.

This was also down vs. their 2022 forecast.

But unfortunately this still used the terrible Metaculus definition.
April 9, 2025 at 10:03 PM
4/ XPT surveyed 33 superforecasters in 2022.

They gave much longer answers: 25% chance by 2047.

But 2022 is before the great timeline shortening.

And their predictions about compute have already been falsified, and they don't seem to know that much about AI.

More:
asteriskmag.com/issues/03/th...
Through a Glass Darkly—Asterisk
Nobody predicted the AI revolution, except for the 352 experts who were asked to predict it.
asteriskmag.com
April 9, 2025 at 10:03 PM
3/ So what about forecasting experts?

The Metaculus AGI question has 1000+ forecasts.

The median has fallen from 50 years to 5.

Unfortunately, the definition is in some ways too stringent for AGI and in others not stringent enough. So I'm skeptical of the specific numbers.
April 9, 2025 at 10:03 PM
In 2022 (blue), they forecast AI wouldn't be able to write simple Python code until 2027.

And even in 2023 (red), they predicted 2025!

They gave much longer answers for "full automation of labour" for unclear reasons.

Also AI expertise ≠ forecasting expertise.
April 9, 2025 at 10:03 PM
2/ To reduce bias, we could consider a wider range of AI experts, as in the AI Impacts survey of thousands of published AI authors.

Median: 25% chance of AI better than humans at "all tasks" by 2032.

But this is from 2023, and their answers have historically been too pessimistic.
April 9, 2025 at 10:03 PM
1/ First up, AI company leaders.

They tend to be most bullish – predicting AGI in 2-5 years.

It's obvious why they might be biased.

But I don't think they should be totally ignored – they have the most visibility into next-gen capabilities.

(And have been more right about recent progress.)
April 9, 2025 at 10:03 PM
Projecting the trend forward:

• In 2 years: AI that can do many 1-day computer-use tasks
• In 4 years: many 1-week tasks
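
A minimal sketch of that extrapolation, assuming a ~1-hour task horizon today and a ~7-month doubling time – both placeholder values rather than METR's exact figures:

```python
# Toy extrapolation of the "time horizon" trend: how long a task AI can
# complete, assuming pure exponential growth. The starting horizon and
# doubling time are placeholder assumptions, not METR's exact numbers.

HORIZON_HOURS_NOW = 1.0  # assumed: ~1-hour tasks at ~50% reliability today
DOUBLING_MONTHS = 7.0    # assumed: horizon doubles roughly every 7 months

def projected_horizon_hours(years_ahead: float) -> float:
    """Projected task horizon after `years_ahead` years."""
    doublings = years_ahead * 12.0 / DOUBLING_MONTHS
    return HORIZON_HOURS_NOW * 2.0 ** doublings

for years in (2, 4):
    h = projected_horizon_hours(years)
    print(f"+{years} years: ~{h:.0f} hours (~{h / 8:.1f} workdays)")
```

With these placeholder numbers you get roughly day-long tasks in two years and multi-week tasks in four – the same ballpark as the projection above.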

On Substack I argue we should expect the trend to continue, and discuss some limitations:

benjamintodd.substack.com/p/the-most-i...
The most important graph in AI right now: time horizon
To understand how close we are to transformative AI, here’s the metric I find most interesting right now: how long are the tasks AI can do?
benjamintodd.substack.com
April 8, 2025 at 3:50 PM
Other meaningful arguments against:
April 6, 2025 at 3:13 PM
The strongest counterargument?

Current AI methods might plateau on ill-defined, contextual, long-horizon tasks – which describes most knowledge work.

Without continuous breakthroughs, profit margins fall and investment dries up.

You can boil it down to whether this trend will continue:
April 6, 2025 at 3:13 PM
5. While real-world deployment faces many hurdles, AI is already very useful in virtual and verifiable domains:

• Software engineering & startups
• Scientific research
• AI development itself

These alone could drive massive economic impact and accelerate AI progress.
April 6, 2025 at 3:13 PM