Andreas Kirsch
@blackhc.bsky.social
My opinions only here.

👨‍🔬 RS DeepMind

Past:
👨‍🔬 R Midjourney 1y 🧑‍🎓 DPhil AIMS Uni of Oxford 4.5y
🧙‍♂️ RE DeepMind 1y 📺 SWE Google 3y 🎓 TUM
👤 @nwspk
100000t of bombs vs the civilian casualty rates in Gaza actually shows that the IDF went out of their way to protect the civilian population

Compare this to, I dunno, the Dresden bomb raids:

25000 deaths over 4 days from just 3900t of bombs

Was that genocide?

en.m.wikipedia.org/wiki/Bombing...
June 29, 2025 at 8:25 AM
Lol this fake statistician has blocked me for calling out his stupidity 😅
June 29, 2025 at 8:20 AM
Because of the urban warfare during the counterinsurgency, 60% of the buildings were destroyed in the course of a couple of months of fighting. This is in line with the report you cited above.

How is this war in Gaza different to that war?

en.m.wikipedia.org/wiki/Falluja...
June 29, 2025 at 8:19 AM
Did the US also commit genocide in Iraq?
June 28, 2025 at 9:56 PM
The Gedankenexperiment says they must be doing a pretty terrible job if this is supposed to be a genocide 🙄
June 28, 2025 at 8:57 PM
x.com/BlackHC/stat...

Some thoughts on the comment paper
June 13, 2025 at 11:35 AM
Oh yeah, I can't believe AI-generated ASMR is also taking off. I've seen one or two of those!
June 11, 2025 at 9:09 PM
Yeah but that's where most of the interesting things are 😅
June 11, 2025 at 4:43 PM
I hope the authors (I QT'ed @MFarajtabar above) can revisit the Tower of Hanoi results and examine the confounders to strengthen the paper (or just drop ToH). This will help keep the focus on the more interesting other environments for which the claims in the paper seem valid 🙏
June 9, 2025 at 9:49 PM
And other influencers exaggerate its results massively:

x.com/RubenHssd/s...
June 9, 2025 at 9:49 PM
So things are not as bleak as the coverage makes them sound. Apropos coverage: here are some glowing reviews of the paper that do not question it:

Of course, @GaryMarcus likes it very much:

garymarcus.substack.com/p/a-knockou...
A knockout blow for LLMs?
LLM “reasoning” is so cooked they turned my name into a verb
June 9, 2025 at 9:49 PM
I want to point to one more claim that is already outdated (the relevant paper was only published a few days ago, so it's hardly anyone's fault):

The ProRL paper by Nvidia has shown that RL-based models can truly learn new things - if you run RL long enough!

arxiv.org/abs/2505.24864
June 9, 2025 at 9:49 PM
But as Gemini 2.5 Pro explains, River Crossing's optimal solutions are rather short; the difficulty comes from a high branching factor and many possible states with dead ends.

That models fail here is a lot more interesting and points towards areas of improvement.
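For a concrete feel of "short but branchy", here is a minimal BFS sketch using the classic missionaries-and-cannibals puzzle as a stand-in (my choice of stand-in; the paper's River Crossing variant has different rules):

```python
# A minimal sketch, assuming the classic missionaries-and-cannibals rules as a
# stand-in for River Crossing: BFS shows the optimal solution is short while
# the state graph still offers plenty of branches to get lost in.
from collections import deque

M = C = 3                                        # three missionaries, three cannibals
BOAT = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # boat loads (capacity 2, at least 1 rower)

def safe(m, c):
    # A bank is safe if it has no missionaries, or at least as many missionaries as cannibals.
    return m == 0 or m >= c

def neighbors(state):
    m, c, boat = state                           # counts on the left bank; boat=1 means boat is left
    sign = -1 if boat else 1
    for dm, dc in BOAT:
        nm, nc = m + sign * dm, c + sign * dc
        if 0 <= nm <= M and 0 <= nc <= C and safe(nm, nc) and safe(M - nm, C - nc):
            yield (nm, nc, 1 - boat)

start, goal = (M, C, 1), (0, 0, 0)
dist, queue = {start: 0}, deque([start])
while queue:
    s = queue.popleft()
    for n in neighbors(s):
        if n not in dist:
            dist[n] = dist[s] + 1
            queue.append(n)

branching = sum(len(list(neighbors(s))) for s in dist) / len(dist)
print(f"optimal solution: {dist[goal]} crossings")  # 11 crossings -- short!
print(f"reachable states: {len(dist)}, avg branching factor: {branching:.1f}")
```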
June 9, 2025 at 9:49 PM
All in all, the Tower of Hanoi results cannot be given any credence because it seems there are many confounders and simpler explanations.

However, I don't think the other games suffer from the same issues. If we look at River Crossing, it seems to hit high token counts very quickly:
June 9, 2025 at 9:49 PM
Thus, @scaling01 calls out their conclusion: it looks like they didn't pay as close attention to the models' reasoning traces as they claimed 😬

x.com/scaling01/s...
June 9, 2025 at 9:49 PM
This can also be observed in other simpler non-reasoning tasks:

x.com/Afinetheore...
June 9, 2025 at 9:49 PM
New LRMs are actually trained to be aware of their token limits. If they cannot think their way through, they find other solutions.

Apparently, the models say so at N=9 in ToH, or they refuse to write out the whole solution "by hand" and write code instead:

x.com/scaling01/s...
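To make "write code instead" concrete, this is roughly the kind of program a model can emit in place of hand-writing all the moves (the textbook recursive solution, sketched by me, not an actual model output):

```python
# Standard recursive Tower of Hanoi: ~10 lines of code generate all 2^N - 1 moves.
def hanoi(n, src, dst, aux, out):
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst, out)   # move the n-1 smaller disks out of the way
    out.append((n, src, dst))          # move disk n from src to dst
    hanoi(n - 1, src, dst, aux, out)   # move the n-1 smaller disks back on top

moves = []
hanoi(9, "A", "C", "B", moves)         # N=9, where the models reportedly balk
print(len(moves))                      # 511 == 2**9 - 1
```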
June 9, 2025 at 9:49 PM
Another "counterintuitive" behavior that is reported that the models start outputting fewer thinking tokens as the problems get more complex (e.g., more disks in Tower of Hanoi).

But again, this can be explained away sadly for ToH (but only ToH!).

x.com/MFarajtabar...
June 9, 2025 at 9:49 PM
Because, wait, it gets worse:

For N >= 12 or 13 disks, the LRM could not output all the moves of the solution even if it wanted to, because the models can only output 64k tokens. Half of the ToH plots are essentially meaningless anyway.

x.com/scaling01/s...
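A quick back-of-the-envelope check (the tokens-per-move figure is my assumption, not a number from the paper):

```python
# When does writing out all 2^N - 1 moves blow the 64k output budget?
TOKEN_LIMIT = 64_000
TOKENS_PER_MOVE = 10  # rough guess for something like "move disk 3 from peg A to peg C"
for n in range(10, 16):
    tokens = (2**n - 1) * TOKENS_PER_MOVE
    print(f"N={n}: ~{tokens:,} tokens -> {'over' if tokens > TOKEN_LIMIT else 'under'} the limit")
```

With ~10 tokens per move, the budget runs out around N=13, consistent with the claim above.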
June 9, 2025 at 9:49 PM
This is very important: we do not need the LRM to be bad at reasoning for this to happen.

It just happens because the correct solution is so long and is not allowed to contain any typos.

(IMO the paper should really consider dropping the ToH environment from their results.)
June 9, 2025 at 9:49 PM
It assumes the LLM samples the tokens of each move correctly with a given (high) probability p and looks at the number of moves needed to write out the full solution. Then:

p("all correct") = p**(2^N - 1)

and p=0.999 matches the ToH results in the paper above.
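The toy model is a couple of lines of Python (my transcription of the formula above; p = 0.999 is the value that matches the paper's curves):

```python
# Per-move success probability p, compounded over the 2^N - 1 moves of ToH.
p = 0.999
for n in range(5, 15):
    moves = 2**n - 1
    print(f"N={n:2d}: {moves:5d} moves, P(all correct) = {p**moves:.3f}")
```

Accuracy collapses from ~1.0 towards 0.0 as N grows, without any change in "reasoning ability".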
June 9, 2025 at 9:49 PM
We use top-p or top-k sampling with a temperature of 0.7 after all. Thus, there is a chance the model gets unlucky and the wrong token is sampled, resulting in failure. (They should really try min-p 😀)
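For reference, a minimal sketch of nucleus (top-p) sampling (my own simplification, not any lab's actual decoding stack), showing where the unlucky draws come from:

```python
import numpy as np

def top_p_sample(logits, temperature=0.7, top_p=0.95, rng=np.random.default_rng()):
    # Softmax with temperature (max is subtracted for numerical stability).
    probs = np.exp((logits - logits.max()) / temperature)
    probs /= probs.sum()
    # Keep the smallest set of most-likely tokens whose total mass reaches top_p.
    order = np.argsort(probs)[::-1]
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    keep = order[:cutoff]
    return rng.choice(keep, p=probs[keep] / probs[keep].sum())

# Even a token that survives truncation with 99.9% probability fails once in
# ~1000 draws -- and a full ToH solution needs thousands of correct moves in a row.
```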

@scaling01 has a nice toy model that matches the paper results:
June 9, 2025 at 9:49 PM