Peter Wildeford
@peterwildeford.bsky.social
Globally ranked top 20 forecaster 🎯

AI is not a normal technology. I'm working at the Institute for AI Policy and Strategy (IAPS) to shape AI for global prosperity and human freedom.
Pinned
Maduro has been captured.

At 2am, US Delta Force operators seized Maduro in Operation "Absolute Resolve." By dawn, he was on a plane to New York to face narco-terrorism charges.

The operation was flawless. But what comes next is far less clear. My latest blog post discusses what might happen. peterwildeford.substack.com/p/maduro-has...
Maduro has been captured. What's next?
The operation was flawless. What comes next is anyone's guess.
peterwildeford.substack.com
👀‼️

Interviewer: In a perfect world, if you knew that every other company would pause, if every country would pause, would you advocate for that?

Hassabis: I think so.
January 21, 2026 at 3:02 AM
wow I can't believe that OpenAI...
(1) had an actual secret conspiracy to undermine Musk and convert to for-profit for personal financial gain and
(2) was dumb enough to actually put the conspiracy into writing
January 16, 2026 at 6:25 PM
Here's how I'm currently using each of the LLMs
January 8, 2026 at 8:15 PM
This was a very chilling read... worth reading in full if you have a New York Times subscription. A summary wouldn't do it justice. www.nytimes.com/2025/12/28/o...
Opinion | When A.I. Took My Job, I Bought a Chain Saw
www.nytimes.com
December 30, 2025 at 8:29 PM
It’s almost a new year and that often calls for some sort of planning.

So I want to share the Google Doc quarterly planning template that Caroline Jeanmaire and I collaborated on and use. People seem to like it!

docs.google.com/document/d/1...
[public] Quarterly Review + Plan Template
Quarterly Review + Plan Template By Peter Wildeford and Caroline Jeanmaire — Make a copy of this (click here) and get to work! V4.1 – last updated 2025 December 22 How is this different? High level me...
docs.google.com
December 22, 2025 at 9:57 PM
Reposted by Peter Wildeford
Peter did an excellent job on this interview! And props to Ronny Chieng for single-handedly introducing shrimp welfare *and* AI safety to a broad audience :)
December 4, 2025 at 1:34 PM
It was amazing to get to sit down with Ronny Chieng and talk about AGI with The Daily Show! www.youtube.com/watch?v=RcPt...
Ronny Chieng Investigates the Promises of AI, the Most Expensive Circle Jerk Ever | The Daily Show
YouTube video by The Daily Show
www.youtube.com
December 4, 2025 at 1:27 PM
"superintelligent AI could replace humans in controlling the planet"

Bernie Sanders is right that this is a real risk that requires urgent attention.

www.theguardian.com/commentisfre...
December 2, 2025 at 8:00 PM
Will competition over advanced AI lead to war?

In this guest post, Delaney extends Fearon’s logic to show that in the run-up to ASI, states might rationally initiate war to prevent losing control of global power altogether.

peterwildeford.substack.com/p/will-compe...
Will competition over advanced AI lead to war?
Fear and Fearon
peterwildeford.substack.com
November 21, 2025 at 9:03 PM
On METR's benchmark, Kimi K2 Thinking performs about as well as Claude 3.7 Sonnet, released February 24, 2025.

This puts China ~8 months behind the US.
November 21, 2025 at 7:30 PM
Everyone keeps saying a US government Manhattan Project for AGI is inevitable.

New forecasting research says otherwise: just 34% likely.

More importantly, treating it as inevitable could trigger the exact catastrophes we're trying to prevent.

peterwildeford.substack.com/p/should-the...
Should the US do a Manhattan Project for AGI?
Such a Project is neither inevitable nor a good idea
peterwildeford.substack.com
November 20, 2025 at 4:27 PM
Chinese hackers just pulled off a cyberattack run almost entirely by AI. The AI did 80-90% of the work autonomously.

This changes everything about cyber warfare economics.

I got cyber experts and intelligence community professionals to help me explain it in my latest post. 👇

peterwildeford.substack.com/p/ai-ran-its...
AI Ran Its First Autonomous Cyberattack
Chinese hackers used AI and changed the economics of cyberattacks
peterwildeford.substack.com
November 15, 2025 at 1:21 AM
A Chinese state-sponsored threat actor jailbroke Claude into doing real-world cyberattacks.

The AI completed roughly 80–90% of the campaign autonomously, with human operators stepping in only for about 4–6 key decision points.

www.anthropic.com/news/disrupt...
Disrupting the first reported AI-orchestrated cyber espionage campaign
A report describing a highly sophisticated AI-led cyberattack
www.anthropic.com
November 13, 2025 at 6:32 PM
I'm interested in following AI progress on ARC-AGI-3
November 13, 2025 at 6:26 PM
Benchmarking Chinese models is difficult.

It seems hard to balance "the Chinese company overclaims its benchmark scores, so independent testing is needed to verify" against "independent benchmarkers can't set up the model well".
November 12, 2025 at 8:34 PM
"Obviously, no one should deploy superintelligence without being able to align and control them"

Great for OpenAI to say this! And it is obvious.

But forgive me for being concerned about OpenAI's track record of actually doing the things they say are "obvious".

Accountability will be key.
November 10, 2025 at 4:38 PM
9 months and 8 days later, my blog has hit over 5000 subscribers 🎉

Thanks to everyone who's been reading - I hope it's been helpful!
November 5, 2025 at 3:46 PM
Both Anthropic and OpenAI are making bold statements about automating science within three years.

My independent assessment is that these timelines are too aggressive, but automation within 4-20 years is likely (90% CI).

We should pay attention to these statements. What if they're right?
November 2, 2025 at 7:11 AM
Everyone's calling AI a bubble. Even Sam Altman. But they're still investing hundreds of billions. What's actually going on? My new blog post explores.

peterwildeford.substack.com/p/ai-is-prob...
AI is probably not a bubble
AI companies have revenue, demand, and paths to immense value
peterwildeford.substack.com
October 29, 2025 at 5:17 PM
There's some uncertainty, but the picture is clear.

The hype crowd was wrong. We're not getting AGI in 2027.

But the progress halt crowd is also wrong. The evals are continuing on trend, as they have all year.

This is not what AI hitting a wall looks like:
October 15, 2025 at 2:58 PM
Back in February 2024 we all made fun of Altman for wanting $7 trillion

...but that was just foreshadowing his recently announced mega infrastructure plans.

Altman's plan is for 250 GW by 2033, which will cost at least $7 trillion... we're not laughing now.
October 14, 2025 at 5:54 PM
My last link shortener died, so here's the updated version! Check it out and get involved in AI policy!

bit.ly/ai-job-list
October 7, 2025 at 10:56 PM
There's a narrative that GPT-5 has proven the end of scaling. This is false.

Claude 4.5 gives us another opportunity to see how AI trends are holding up. We can project current trends and compare.

I forecast METR will find Claude 4.5 to have a 2-4h time horizon.
October 7, 2025 at 5:14 PM
5 fellowships and 10 additional roles that you can apply to in order to kick-start your AI policy career. Check them out!

=> t.ly/ai-jobs
October 6, 2025 at 8:16 PM