Marvin Baumann
@marvintbaumann.bsky.social
Bachelor Physics & AI, Master Economics.
Passion for Geopolitics, Existentialism, Psychology, & VC.
I care about growing European strength, wealth and beauty.
Berlin, Europe.
10/ Further out, other domains of intelligence like robots & autonomous vehicles, biology, pharma, tabular data, math, and so on could again lead to another inflection in growth rates, even as the initial LLM-based wave of AI matures with sigmoidally flattening growth curves.
December 7, 2024 at 12:45 PM
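To illustrate the stacked S-curve idea in 10/: if LLM-based AI follows a roughly logistic adoption curve and a later domain (say robotics) starts its own curve a few years behind with a higher ceiling, the combined curve shows a second inflection even as the first wave flattens. A minimal sketch, with made-up midpoints, ceilings, and steepness values chosen purely for illustration:

```python
import math

def logistic(t, ceiling, midpoint, steepness):
    """Logistic (sigmoid) curve: value at time t, saturating at `ceiling`."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

# Made-up illustration values, not forecasts: an LLM wave maturing mid-decade,
# a second wave (e.g. robotics/AVs) starting later with a larger ceiling.
for year in range(2024, 2035):
    llm_wave = logistic(year, ceiling=100.0, midpoint=2026.0, steepness=0.9)
    second_wave = logistic(year, ceiling=300.0, midpoint=2031.0, steepness=0.7)
    total = llm_wave + second_wave
    print(f"{year}: LLM ~{llm_wave:6.1f}  new domains ~{second_wave:6.1f}  total ~{total:6.1f}")
```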
9/ Thus, from a technological perspective on AI, from a demand and SaaS-customer perspective, as well as from a semiconductor supply chain perspective, there is no apparent roadblock to compound annual growth rates for AI in the 50-100% range for at least the next 3 years.
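For scale on what 50-100% compound annual growth means cumulatively, a quick back-of-the-envelope calculation (illustrative only):

```python
# Cumulative multiple implied by compounding an annual growth rate for n years.
def cumulative_multiple(annual_growth, years):
    return (1.0 + annual_growth) ** years

for rate in (0.50, 0.75, 1.00):
    print(f"{rate:.0%}/yr for 3 years -> {cumulative_multiple(rate, 3):.1f}x the starting base")
# 50% -> ~3.4x, 75% -> ~5.4x, 100% -> 8.0x
```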
8/ Recent results from Salesforce and others are early indications, backed by hard data, that AI profits are about to reach the application layer and that AI is delivering true value.
7/ TSMC recently notified its upstream supply chain to increase capacity by as much as 200% over the next three years, while downstream we still see benefits from scaling AI in the domains of pre-training, synthetic data, and inference-time compute.
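As a rough supply-side cross-check: a 200% capacity increase means 3x the starting capacity, which spread over three years implies roughly 44% compound annual growth. A one-line sketch of that arithmetic:

```python
# Compound annual growth rate implied by a total capacity multiple over n years.
def implied_cagr(total_multiple, years):
    return total_multiple ** (1.0 / years) - 1.0

# A 200% increase means 3x the starting capacity, here spread over three years.
print(f"Implied annual growth: {implied_cagr(3.0, 3):.0%}")  # ~44%
```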
6/ I'd expect another doubling of CoWoS capacity in 2026, with an increasing share of packaging located in the US as well.

To grow chip packages further, the industry has to move away from circular wafers to rectangular panels, which requires a lot of cross-industry cooperation; I don't understand the space well enough to estimate how likely that is.
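To illustrate why the wafer-to-panel move matters for ever-larger packages: a rectangular panel wastes far less edge area than a round wafer when tiled with big square packages. The sketch below assumes an ~80 mm package and a 510 x 515 mm panel (panel formats are still being standardized), with a simplified grid layout and no edge exclusion or scribe lines, so treat the counts as rough illustrations only:

```python
import math

def packages_on_wafer(wafer_diameter_mm, package_mm, offset_steps=20):
    """Best simple-grid count of square packages that fit entirely on a round
    wafer, trying a range of grid offsets (no edge exclusion, no scribe lines)."""
    r = wafer_diameter_mm / 2.0
    cells = int(wafer_diameter_mm // package_mm) + 2
    best = 0
    for sx in range(offset_steps):
        for sy in range(offset_steps):
            ox = sx * package_mm / offset_steps
            oy = sy * package_mm / offset_steps
            count = 0
            for i in range(-cells, cells):
                for j in range(-cells, cells):
                    xs = (ox + i * package_mm, ox + (i + 1) * package_mm)
                    ys = (oy + j * package_mm, oy + (j + 1) * package_mm)
                    # The package fits only if all four corners lie on the wafer.
                    if all(math.hypot(x, y) <= r for x in xs for y in ys):
                        count += 1
            best = max(best, count)
    return best

def packages_on_panel(panel_w_mm, panel_h_mm, package_mm):
    """Simple grid count of square packages on a rectangular panel."""
    return int(panel_w_mm // package_mm) * int(panel_h_mm // package_mm)

PACKAGE_MM = 80.0   # assumed ~80 mm x 80 mm advanced package (illustrative)
wafer_count = packages_on_wafer(300.0, PACKAGE_MM)
panel_count = packages_on_panel(510.0, 515.0, PACKAGE_MM)  # one discussed panel size

wafer_area = math.pi * 150.0 ** 2
panel_area = 510.0 * 515.0
print(f"300 mm wafer: {wafer_count} packages ({wafer_count * PACKAGE_MM**2 / wafer_area:.0%} area used)")
print(f"510x515 mm panel: {panel_count} packages ({panel_count * PACKAGE_MM**2 / panel_area:.0%} area used)")
print(f"Panel/wafer area ratio: {panel_area / wafer_area:.1f}x")
```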
5/ On the packaging front, the trend toward ever larger chips is apparent as shrinking transistors economically becomes harder (high-NA EUV, etc.).

TSMC will ramp as much as humanly possible here. I believe we can get to ~100k wafers per month (wpm) by the end of 2025, split roughly 50/50 between CoWoS-S and CoWoS-L, with L outpacing S in 2026.
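Putting the capacity estimates from 5/ and 6/ together in one place (these are this thread's estimates, not official figures, and the 2026 S/L mix below is an assumed, illustrative split):

```python
# Rough CoWoS capacity trajectory implied by the estimates in 5/ and 6/
# (thread estimates, not official figures; wpm = wafers per month).
eoy2025_total_wpm = 100_000                       # ~100k wpm by end of 2025
eoy2025_mix = {"CoWoS-S": 0.50, "CoWoS-L": 0.50}  # roughly 50/50 split

eoy2026_total_wpm = eoy2025_total_wpm * 2         # "another doubling ... in 2026"
eoy2026_mix = {"CoWoS-S": 0.35, "CoWoS-L": 0.65}  # assumed L-heavy mix, illustrative

for label, total, mix in [("EOY25", eoy2025_total_wpm, eoy2025_mix),
                          ("EOY26", eoy2026_total_wpm, eoy2026_mix)]:
    parts = ", ".join(f"{kind} ~{int(total * share):,} wpm" for kind, share in mix.items())
    print(f"{label}: ~{total:,} wpm total ({parts})")
```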
4/ Otherwise, Jensen will ship Rubin with 12-layer HBM4 and a new architecture beginning Q2/Q3 CY2026, with 16-layer HBM4 coming in early 2027.
3/ SK Hynix & Micron are rushing to bring HBM4 to market and to certify 16-layer HBM as well. If HBM4 arrives 2+ quarters earlier than the rest of Rubin's development takes, I could imagine NVDA even bringing out an intermediate B400 chip: 12-layer HBM4 on the Blackwell architecture.
2/ B300A is a single-die Blackwell chip with half the memory stacks, but likely 12-layer HBM3e as well. This will be a cheaper, lower-powered chip which, crucially, can be packaged using CoWoS-S (same as Hopper) instead of the scarce CoWoS-L needed for Blackwell.
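For a sense of what the stack and layer counts in 2/-4/ mean in capacity terms, here is the simple stack arithmetic. The per-layer DRAM densities (24 Gb for HBM3e, 32 Gb for HBM4) are commonly discussed values rather than confirmed specs, and the stack counts per package are assumptions for illustration:

```python
GBIT_PER_GBYTE = 8  # 8 gigabits per gigabyte

def hbm_capacity_gb(stacks, layers, gbit_per_layer):
    """Total HBM capacity per package: stacks x layers x per-layer DRAM density."""
    return stacks * layers * gbit_per_layer / GBIT_PER_GBYTE

# Assumed per-layer densities (commonly discussed values, not confirmed specs).
HBM3E_GBIT = 24
HBM4_GBIT = 32

# Illustrative configurations; the stack counts are assumptions, not product specs.
configs = [
    ("Dual-die Blackwell, 8 stacks of 12-layer HBM3e", 8, 12, HBM3E_GBIT),
    ("Single-die B300A, 4 stacks of 12-layer HBM3e (half the stacks)", 4, 12, HBM3E_GBIT),
    ("Hypothetical 8 stacks of 12-layer HBM4", 8, 12, HBM4_GBIT),
    ("Hypothetical 8 stacks of 16-layer HBM4", 8, 16, HBM4_GBIT),
]

for name, stacks, layers, gbit in configs:
    print(f"{name}: ~{hbm_capacity_gb(stacks, layers, gbit):.0f} GB")
```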