gigg44.bsky.social
@gigg44.bsky.social
January 2, 2026 at 8:44 AM
Possible scenarios for AI, all pretty much confirming the AI bubble
https://infosec.exchange/@david_chisnall
David Chisnall (*Now with 50% more sarcasm!*) (@david_chisnall@infosec.exchange)
For those who are skeptical that AI is a bubble, let's look at the possible paths from the current growth:

**Scenario 1: Neither training nor inference costs go down significantly.**

Current GenAI offerings are *heavily* subsidised by burning investor money; when that runs out, prices will go up. Only [8% of adults in the US would pay *anything* for AI in products](https://www.zdnet.com/article/only-8-of-americans-would-pay-extra-for-ai-according-to-zdnet-aberdeen-research/), and the percentage who would pay the unsubsidised cost is lower. As costs go up, the number of people willing to pay goes down, and the economies of scale start to erode.

**End result**: Complete crash.

**Scenario 2: Inference costs remain high, training costs drop.**

This one largely depends on AI companies successfully lobbying to make plagiarism legal as long as it's 'for AI'. They've been quite successful at that so far, so there's a reasonable chance of this. In this scenario, none of the big AI companies has a moat. If training costs go down, the number of people who can afford to build foundation models goes up. This might be good for NVIDIA (you sell fewer chips per customer, to more customers, and hopefully it balances out). OpenAI and Anthropic have nothing of value and end up playing in a highly competitive market.

This scenario is why DeepSeek spooked the market. If you can train something like ChatGPT for $30M, there are hundreds of companies that can do it. If you can do it for $3M, there are hundreds of companies for which this would be a rounding error in their IT budgets. Inference is still not at the break-even point, so costs go up, but for use cases where a 2x cost is worthwhile there's still profit.

**End result**: This is a moderately good case. There will be some economic turmoil, because a few hundred billion dollars have been invested in producing foundation models on the assumption that the models, and the ability to create them, constitute a moat. But companies like Amazon, Microsoft and Google will still be able to sell inference services at a profit. None will have lock-in to a model, so prices will drop to close to cost, though still higher than they are today. With everyone actually paying, there won't be such a rush to put AI in everything. The datacentre investment is not destroyed, because there's still a market for inference. Growth will likely stall, though, so I expect a lot of the speculative building to be wiped out. I'd expect this to push the USA into recession, but that's more the stock market catching up with economic realities.

**Scenario 3: Inference costs drop a lot, training costs remain high.**

This is the one a lot of folks are hoping for, because it means on-device inference will replace cloud services. Unfortunately, most training is done by companies that expect to recoup that investment by *selling inference*. This is roughly the same problem as COTS software: you do the expensive thing (writing software / training) for free and then hope to make it up by charging for the thing that costs almost nothing (copying software / inference). We've seen that this is a precarious situation. It's easy for China to devote a load of state money to training a model and then give it away for the sole purpose of undermining the business model of a load of US companies (and this would be a good strategy for them). Without a path to recouping their investment, the only people who can afford to train models have no incentive to do so.

**End result**: All of the equity sunk into building datacentres to sell inference is wasted. Probably close to a trillion dollars wiped off the stock market in the first instance. In the short term, a load of AI startups that are just wrapping OpenAI / Anthropic APIs suddenly become profitable, which may offset the losses. But new model training becomes economically infeasible. Models become increasingly stale: in programming, they insist on using deprecated / removed language features and APIs instead of their replacements; in translation, they miss modern idioms and slang; in summarisation, they don't work on documents written in newer structures; in search, they don't know anything about recent events; and so on. After a few years, people start noticing that AI products are terrible, but none of the vendors can afford to make them good. RAG can slow this decline a bit, but at the expense of increasingly large contexts (which push up inference compute costs). This is probably a slow-deflate scenario.

**Scenario 4: Inference and training costs both drop a lot.**

This one is quite interesting, because it destroys the moat of the existing players and also wipes out the datacentre investments, but makes it easy for new players to arise. If it's cheap to train a new model and to do the inference, then a load of SaaS products will train bespoke models and do their own inference. Open-source / cooperative groups will train their own models and be able to embed them in things.

**End result**: Wipe a couple of trillion off the stock market and most likely cause a depression, but end up with a proliferation of foundation models in scenarios where they're actually useful (and, if the costs are low enough, in a lot of places where they aren't). The most interesting thing about this scenario is that it's the worst for the economy but the best outcome for the proliferation of the technology.

**Variations**: Costs may come down a bit, but not much; this is quite similar to the no-change scenario. Inference costs may come down, but only on expensive hardware: for example, a $100,000 chip that can run inference for 10,000 users simultaneously, but which can't scale down to a $10 chip that can run the same workloads. This is interesting because it favours cloud vendors, but is otherwise somewhere between the cheap and expensive inference cases.
**Overall conclusion**: There are some scenarios where the outcome for the *technology* is good, but the outcomes for the *economy* and the major players are almost always bad. And the cases that are best for widespread adoption of the technology are the ones that are *worst* for the economy. That's pretty much the definition of a bubble: a lot of money invested in ways that will result in losing the money.
November 25, 2025 at 4:33 AM
Enshittification formula explains more than just why Google is shit now
Pluralistic: The enshittification of labor (07 Nov 2025) – Pluralistic: Daily links from Cory Doctorow
November 9, 2025 at 11:56 AM
November 2, 2025 at 7:45 AM
September 25, 2025 at 8:41 PM
It depends on what you watch or read before bed. It can’t be too exciting or rewarding. One thing is sure: it shouldn’t be in bed. www.nytimes.com/2025/08/1…
August 19, 2025 at 3:51 AM
🔗 Yet another LLM rant - Dennis Schubert // overengineer.dev

This explains the difference between how our brains and LLMs work
Yet another LLM rant - Dennis Schubert
Random thoughts, articles and projects by a chronic overengineer.
August 10, 2025 at 4:15 AM
Reposted
A few weeks ago, I posted a video of the Scheiß AfD Jodler (Shit AfD Yodellers) song ruining an interview with the fascist party head. Here's a feature about them. We need a Shit Donald Trump choir in DC.
www.theguardian.com/world/2025/a...
Choir that drowned out Germany’s AfD leader happy to ‘bend the ear’ of country
Group ‘surprised’ when their song was used to disrupt an interview with Alice Weidel – but now they face a backlash
August 4, 2025 at 11:19 AM
overcast.fm/+AA_ztzSH…

This is one of the podcast episodes I have listened to. To learn about Google is to understand the modern internet
July 30, 2025 at 6:03 PM
July 14, 2025 at 7:18 PM
Why right-wing populism works so well at the moment… it's the supposedly simple solution of ignoring the problem and saying it's too hard
July 13, 2025 at 8:18 AM
Project Killswitch has arrived
June 30, 2025 at 12:48 PM
June 30, 2025 at 8:25 AM
Caught up on Ichi the Witch #Manga
June 30, 2025 at 7:30 AM
I find it interesting how Verstappen reacted the way he did to Antonelli. You would expect him to be angrier. Could it be he already knows Antonelli is gonna be his new teammate? #F1
June 29, 2025 at 1:31 PM
this is a test theverge.com
June 26, 2025 at 3:41 PM
First time I have heard about surveillance pricing. One more for the dystopian playbook. 🔗 pluralistic.net
June 26, 2025 at 3:11 PM
Interesting new Manga I just found on Jump

War of the Adults is an interesting spin on society. What happens when adulthood is not determined by age but by good deeds? Who decides what is good, and what happens to criminals?
June 14, 2025 at 4:42 AM
Finished reading: Digitalwüste Deutschland by Prof. Dr. Michael Resch 📚
June 3, 2025 at 7:27 PM
Witcher 4 looks amazing in Unreal
June 3, 2025 at 1:50 PM
Reposted
Yes, that really would have been nice, if only someone had written some of that into the constitution!

Why doesn't Friedrich Merz actually know the constitution? Could he be deported for that?
May 17, 2025 at 6:37 AM
Reposted
“And so the Greeks send me this horse, we’re talking about one of the most beautiful horses you’ve ever seen. So big. So strong. Normally they keep this kind of horse for themselves but they were such big fans they said sir, please take our big wonderful horse we’ll even bring it to your house”
May 12, 2025 at 12:03 PM