Peter Wildeford
@peterwildeford.bsky.social
Globally ranked top 20 forecaster 🎯

AI is not a normal technology. I'm working at the Institute for AI Policy and Strategy (IAPS) to shape AI for global prosperity and human freedom.
A Chinese state-sponsored threat actor jailbroke Claude into carrying out real-world cyberattacks.

The AI completed roughly 80–90% of the campaign autonomously, with human operators stepping in only for about 4–6 key decision points.

www.anthropic.com/news/disrupt...
Disrupting the first reported AI-orchestrated cyber espionage campaign
A report describing a highly sophisticated AI-led cyberattack
www.anthropic.com
November 13, 2025 at 6:32 PM
I'm interested in following AI progress on ARC-AGI-3
November 13, 2025 at 6:26 PM
Benchmarking Chinese models is difficult.

It seems hard to weigh "the Chinese company overclaims its benchmark scores, so independent testing is needed to verify" against "independent benchmarkers can't set up the model well".
November 12, 2025 at 8:34 PM
"Obviously, no one should deploy superintelligence without being able to align and control them"

Great for OpenAI to say this! And it is obvious.

But forgive me for being concerned about OpenAI's track record of doing the things they say are "obvious".

Accountability will be key.
November 10, 2025 at 4:38 PM
9 months and 8 days later, my blog has hit over 5000 subscribers 🎉

Thanks to everyone who's been reading - I hope it's been helpful!
November 5, 2025 at 3:46 PM
Both Anthropic and OpenAI are making bold statements about automating science within three years.

My independent assessment is that these timelines are too aggressive - but automation within 4-20 years is likely (90% CI).

We should pay attention to these statements. What if they're right?
November 2, 2025 at 7:11 AM
Everyone's calling AI a bubble. Even Sam Altman. But they're still investing hundreds of billions. What's actually going on? My new blog post explores.

peterwildeford.substack.com/p/ai-is-prob...
AI is probably not a bubble
AI companies have revenue, demand, and paths to immense value
peterwildeford.substack.com
October 29, 2025 at 5:17 PM
There's some uncertainty, but the picture is clear.

The hype crowd was wrong. We're not getting AGI in 2027.

But the progress halt crowd is also wrong. The evals are continuing on trend, as they have all year.

This is not what AI hitting a wall looks like.
October 15, 2025 at 2:58 PM
Back in February 2024 we all made fun of Altman for wanting $7 trillion

...but that was just foreshadowing his recently announced mega infrastructure plans.

Altman's plan is for 250 GW by 2033, which will cost at least $7 trillion... we're not laughing now.
October 14, 2025 at 5:54 PM
My last link shortener died, so here's the updated version! Check it out and get involved in AI policy!

bit.ly/ai-job-list
October 7, 2025 at 10:56 PM
There's a narrative that GPT-5 has proven the end of scaling. This is false.

Claude 4.5 gives us another opportunity to see how AI trends are holding up. We can project current trends and compare.

I forecast METR will find Claude 4.5 to have a 2-4h time horizon.
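
For the curious, that kind of extrapolation fits in a few lines. This is a minimal sketch, assuming METR's reported ~7-month doubling trend in time horizons and a ~1-hour anchor in early 2025; both are rounded illustrative numbers, not METR's exact figures:

```python
from datetime import date

# Illustrative assumptions, not METR's exact figures:
DOUBLING_MONTHS = 7.0            # reported ~7-month doubling trend
ANCHOR_DATE = date(2025, 2, 1)   # ~1-hour horizon around early 2025
ANCHOR_HORIZON_HOURS = 1.0

def projected_horizon_hours(on: date) -> float:
    """Extrapolate the 50%-success time horizon to a given date."""
    months = (on.year - ANCHOR_DATE.year) * 12 + (on.month - ANCHOR_DATE.month)
    return ANCHOR_HORIZON_HOURS * 2 ** (months / DOUBLING_MONTHS)

# Claude Sonnet 4.5 shipped ~8 months after the anchor:
print(f"{projected_horizon_hours(date(2025, 10, 1)):.1f}h")  # ~2.2h
```

Eight months of doubling at that rate lands around 2.2 hours, squarely inside the 2-4h range.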
October 7, 2025 at 5:14 PM
5 fellowships and 10 additional roles that you can apply to in order to kick-start your AI policy career. Check them out!

=> t.ly/ai-jobs
October 6, 2025 at 8:16 PM
Three quick notes on Claude Sonnet 4.5:

1. Having a separate Opus 4.1 (but no Sonnet 4.1) and Sonnet 4.5 (but no Opus 4.5) is really something
September 29, 2025 at 6:08 PM
What does the recent $100B NVIDIA deal mean for AI?

OpenAI, NVIDIA, and Oracle created a $400B+ circular financing scheme that makes 25% of the S&P 500 a bet on AGI.

The math only works if they're right about AI scaling. And it might actually work.

peterwildeford.substack.com/p/openai-nvi...
OpenAI, NVIDIA, and Oracle: Breaking Down $100B Bets on AGI
How vendor financing turns the S&P 500 into a giant AGI bet
peterwildeford.substack.com
September 25, 2025 at 8:06 PM
Don’t let nuance lead you to miss the bigger picture -- even if Yudkowsky and Soares are overconfident, there still are serious dangers from scaling AI to superintelligence.

The real overconfidence that matters most is the overconfidence of the AI companies.
September 19, 2025 at 6:43 PM
"If Anyone Builds It, Everyone Dies". It's a shocking headline. How well does it hold up? Today I review.

peterwildeford.substack.com/p/if-we-buil...
If We Build AI Superintelligence, Do We All Die?
If you're not at least a little doomy about AI, you're not paying attention
peterwildeford.substack.com
September 18, 2025 at 1:57 PM
I did an interview with The Oracle where I talk about why all your hot takes are probably wrong! Check it out! news.polymarket.com/p/nothing-ev...
🔮 #3 Top Forecaster: "Nothing Ever Happens"
Inside Peter Wildeford’s award-winning forecasting strategy
news.polymarket.com
September 10, 2025 at 1:52 PM
AI is not a normal technology.

Normal tech doesn't deceive its operators.
Normal tech doesn't autonomously blackmail people.
Normal tech doesn't refuse to go back into the toolbox.
Normal tech doesn't develop goals you never gave it.

There's little normal about AI.
September 9, 2025 at 7:31 PM
Trump is right
Trump: You have some vaccines that are so amazing. The polio vaccine I think is amazing. A lot of people think that covid is amazing. You know. I think you have to be very careful when you say that some people don't have to be vaccinated.
September 6, 2025 at 3:57 PM
AI agents can be hijacked to spread like computer viruses. Prompt injection can be used to build AI-driven malware.

In this demonstration, AgentHopper exploits flaws in GitHub Copilot and infects repositories, jumps between coding agents, and spreads automatically through GitHub commits.
AgentHopper: An AI Virus · Embrace The Red
AgentHopper: A proof-of-concept AI Virus
embracethered.com
September 6, 2025 at 3:42 PM
AI chips are potentially the most complex objects humans have ever made.

They take hundreds of steps to fabricate and can only be constructed using knowledge shared across a handful of companies on Earth.

Today, Erich explains on the blog how chips are made.
Explainer: How AI Chips Are Made
It's complicated
peterwildeford.substack.com
September 5, 2025 at 4:42 PM
Compute is the new oil. It's cliché but true, especially geopolitically.

Like oil, compute is:
- Scarce (demand far exceeds supply)
- Concentrated (US: 75%, China: 15%, EU: 5%)
- Controllable (export controls actually work)
- Strategically vital (biggest AI bottleneck)
Compute is a strategic resource
Computing power still determines who wins the AI race
peterwildeford.substack.com
September 1, 2025 at 8:34 PM
"Agentic AI has been weaponized. AI models are now being used to perform sophisticated cyberattacks, not just advise on how to carry them out."

From Anthropic’s latest AI Threat Intelligence Report
August 28, 2025 at 1:53 AM
GPT-5 is here.

It’s not a giant leap in intelligence. But for 98% of users, it’s still the best ChatGPT yet.

What does this tell us about the future of AI? In today's blog, I dig in.

peterwildeford.substack.com/p/gpt-5-a-sm...
GPT-5: a small step for intelligence, a giant leap for normal people
GPT-5 focuses on where the money is - everyday users, not AI elites
peterwildeford.substack.com
August 8, 2025 at 1:17 PM
Reposted by Peter Wildeford
they should be taken to The Vague for this post crime
August 7, 2025 at 7:08 PM