Andreas Hochlehnert
@ahochlehnert.bsky.social
PhD student in ML at the Tübingen AI Center & the International Max Planck Research School for Intelligent Systems
7/ Takeaway?

Many supposed gains don’t hold up under scrutiny.
Progress is possible—but let’s build on reproducible foundations.

🧠 Full paper: arxiv.org/abs/2504.07086

🧑‍🔬 By: @hrdkbhatnagar.bsky.social @vishaalurao.bsky.social @samuelalbanie.bsky.social @bayesiankitten.bsky.social @MatthiasBethge
A Sober Look at Progress in Language Model Reasoning: Pitfalls and Paths to Reproducibility
Reasoning has emerged as the next major frontier for language models (LMs), with rapid advances from both academic and industrial labs. However, this progress often outpaces methodological rigor, with...
April 10, 2025 at 3:42 PM
6/ Our recommendations:

– Evaluate with ≥10 seeds

– Tune decoding per model
– Use appropriate prompts/templates
– Standardize hardware/software (we use Docker)
– Open-source everything

📦 Code, prompts, outputs: github.com/bethgelab/so...
GitHub - bethgelab/sober-reasoning
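The first recommendation can be sketched as a tiny harness: run the identical eval under ten or more seeds and report mean ± standard deviation rather than a single number. This is an illustrative skeleton, not the repo's actual code — `evaluate_model` and its placeholder score are stand-ins for a real benchmark runner.

```python
import random
import statistics

def evaluate_model(seed: int) -> float:
    """Stand-in for a real benchmark run; returns Pass@1 for one seed."""
    rng = random.Random(seed)         # seed everything your pipeline uses
    # ... load model, fix temperature/top-p, run the benchmark ...
    return 0.45 + rng.gauss(0, 0.03)  # placeholder score with seed-level noise

# Evaluate under >= 10 seeds and aggregate.
scores = [evaluate_model(seed) for seed in range(10)]
mean = statistics.mean(scores)
sd = statistics.stdev(scores)
print(f"Pass@1 = {mean:.3f} ± {sd:.3f} over {len(scores)} seeds")
```

Reporting the spread alongside the mean is what makes later comparisons against a base model meaningful.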
April 10, 2025 at 3:38 PM
5/ What actually works?
🔹 RL on top of distilled models? Often negligible gains, prone to overfitting.

🔹 Supervised finetuning (SFT) on reasoning traces? Stable & generalizable.
April 10, 2025 at 3:38 PM
4/ Variance is everywhere:

– Random seed: swings Pass@1 by 5–15pp
– Temperature/top-p: another ±10pp
– Software & Hardware? Yes, even that changes scores

🎯 Single-seed results on small datasets are essentially noise.
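The small-dataset point can be made concrete with a back-of-the-envelope: with n questions and true Pass@1 p, a single run's accuracy has binomial standard error sqrt(p·(1−p)/n). The numbers below are illustrative, not from the paper, and ignore decoding variance, which only adds to the swings.

```python
import math

def pass1_stderr(p: float, n: int) -> float:
    """Binomial standard error of accuracy over n questions with true rate p."""
    return math.sqrt(p * (1.0 - p) / n)

# AIME-sized benchmark: 30 questions.
se_small = pass1_stderr(0.5, 30)   # roughly +/-9pp per run
# A 500-question benchmark shrinks the noise considerably.
se_large = pass1_stderr(0.5, 500)  # roughly +/-2pp per run
print(f"30 questions: ±{100 * se_small:.1f}pp, 500 questions: ±{100 * se_large:.1f}pp")
```

At 30 questions, one extra correct answer moves the score by over 3pp — which is why single-seed results there are mostly noise.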
April 10, 2025 at 3:37 PM
3/ We re-evaluated recent 1.5B and 7B reasoning models on 6 benchmarks under controlled settings.

➡️ Performance dropped by up to 17%
➡️ Improvements fall within variance range of the base model
➡️ Some models don’t beat the baseline!
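The "within variance range" check amounts to a simple comparison: is the mean gain larger than the base model's own seed-to-seed spread? A minimal sketch with hypothetical scores (not the paper's data):

```python
import statistics

# Hypothetical Pass@1 scores from 10 independent seeds per model.
base_runs  = [0.42, 0.47, 0.39, 0.45, 0.44, 0.41, 0.48, 0.43, 0.40, 0.46]
tuned_runs = [0.45, 0.44, 0.48, 0.43, 0.47, 0.42, 0.46, 0.45, 0.44, 0.49]

base_mean = statistics.mean(base_runs)
tuned_mean = statistics.mean(tuned_runs)
base_sd = statistics.stdev(base_runs)  # seed-to-seed spread of the base model

gain = tuned_mean - base_mean
# Crude sanity check: a gain smaller than the base model's seed std
# is indistinguishable from noise (a proper test would use a t-test).
print(f"gain = {gain:+.3f}, base seed sd = {base_sd:.3f}")
print("within noise" if abs(gain) < base_sd else "exceeds seed sd")
```

Here a +1.8pp "improvement" sits well inside a ~3pp seed standard deviation, mirroring the pattern the re-evaluation found.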
April 10, 2025 at 3:37 PM
2/ Reasoning is the next frontier for LMs—but current evaluation practices often lack rigor.

We find that many celebrated gains from RL methods vanish once you:

✅ average over multiple seeds
✅ control decoding
✅ standardize prompt & infra
April 10, 2025 at 3:36 PM
We are just getting started! We're building better filters, aggregating released benchmarks (DataComp style), and developing fast, accurate OpenThinking models. Stay tuned! w/
@hrdkbhatnagar.bsky.social, @vishaalurao.bsky.social, @bayesiankitten.bsky.social, Matthias Bethge [6/6]
February 17, 2025 at 6:27 PM
These issues encourage shortcuts and flawed reasoning. If GRPO rewards bad logic, models reinforce errors instead of improving. Garbage In, Garbage Out 🚨 [5/6]
February 17, 2025 at 6:26 PM
🔸 Some questions reference figures that aren't included! Text-only models can't infer missing visuals. [4/6]
February 17, 2025 at 6:25 PM
🔸 Mathematical proofs are a challenge. There's no automated way to verify them, and answers often only show an initial equation, leading to unreliable training signals. [3/6]
February 17, 2025 at 6:25 PM
Blog (For Updates): huggingface.co/datasets/bet...

🔸 Some questions contain subquestions, but only one answer is labeled. The model may get penalized for "wrong" but valid reasoning. [2/6]
February 17, 2025 at 6:24 PM