AlgoPerf
@algoperf.bsky.social
AlgoPerf benchmark for faster neural network training via better training algorithms
The explainer video: www.youtube.com/watch?v=_yX1...
ICLR 2025: Accelerating Neural Network Training (AlgoPerf)
YouTube video by Tübingen Machine Learning
www.youtube.com
April 3, 2025 at 11:15 AM
More details:
📄 ICLR 2025 results paper: openreview.net/pdf?id=CtM5x...
📄 Benchmark paper: arxiv.org/abs/2306.07179
💻 AlgoPerf codebase: github.com/mlcommons/al...
March 14, 2025 at 8:57 PM
This is just the beginning! With the help of the community, we want to test even more training recipes, push the SOTA for neural network training methods even further, and improve the AlgoPerf benchmark. Follow us & join the working group (mlcommons.org/working-grou...).
Algorithms - MLCommons
The Algorithms working group creates a set of rigorous and relevant benchmarks to measure neural network training speedups due to algorithmic improvements.
mlcommons.org
March 14, 2025 at 8:57 PM
And the winner in the self-tuning ruleset, based on Schedule-Free AdamW, demonstrated a new level of effectiveness for completely hyperparameter-free neural network training: roughly 10% faster training than a NadamW baseline with well-tuned default hyperparameters.
March 14, 2025 at 8:57 PM
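For context, here is a minimal sketch of what a hyperparameter-free setup built on Schedule-Free AdamW can look like in PyTorch, using the open-source schedulefree package. The model, data, and learning rate below are placeholders for illustration, not the submission's actual settings.

```python
import torch
import schedulefree  # pip install schedulefree

# Placeholder model and data; the actual submission targets AlgoPerf workloads.
model = torch.nn.Linear(784, 10)
optimizer = schedulefree.AdamWScheduleFree(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

optimizer.train()  # Schedule-Free optimizers need explicit train/eval mode switches.
for step in range(1000):
    x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

optimizer.eval()  # Switch to the averaged iterate before evaluation.
```

Note that there is no learning rate schedule anywhere in this loop; that is the point of the "schedule-free" approach.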
Then, we asked the community to submit training algorithms. The results? The winner of the external tuning ruleset, using Distributed Shampoo, reduced training time by ~30% over our well-tuned baseline—showing that non-diagonal methods can beat Adam, even in wall-clock time!
March 14, 2025 at 8:57 PM
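To illustrate what "non-diagonal" means here: a toy sketch of a Shampoo-style update for a single weight matrix, in plain PyTorch. This is not the Distributed Shampoo submission itself; the step size, epsilon, and overall simplifications are assumptions for illustration only.

```python
import torch

def shampoo_step(W, grad, L, R, lr=0.1, eps=1e-6):
    """One Shampoo-style update for a single weight matrix W (toy sketch).

    L and R are running left/right second-moment statistics (full matrices,
    not just the diagonal as in Adam-family methods).
    """
    L += grad @ grad.T          # left statistic,  shape (m, m)
    R += grad.T @ grad          # right statistic, shape (n, n)

    def inv_fourth_root(M):
        # Matrix inverse fourth root via eigendecomposition (symmetric PSD M).
        vals, vecs = torch.linalg.eigh(M)
        return vecs @ torch.diag((vals + eps) ** -0.25) @ vecs.T

    W -= lr * inv_fourth_root(L) @ grad @ inv_fourth_root(R)
    return W, L, R

# Toy usage with random shapes. A real implementation amortizes the
# eigendecompositions and adds blocking, grafting, distributed sharding, etc.
m, n = 8, 4
W = torch.randn(m, n)
L, R = torch.zeros(m, m), torch.zeros(n, n)
grad = torch.randn(m, n)
W, L, R = shampoo_step(W, grad, L, R)
```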
(3) Training algorithms must perform well across 8 realistic deep learning workloads (ResNet-50, Conformer, ViT, etc.). (4) Submissions compete on the runtime to reach a given performance threshold. (5) Hyperparameter tuning is explicitly accounted for with our tuning rulesets.
March 14, 2025 at 8:57 PM
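A rough sketch of the scoring idea in (4): record the wall-clock time at which a run first reaches the workload's validation target, and treat runs that never get there as infinitely slow. The function name, log format, and numbers below are illustrative, not the benchmark's actual scoring code.

```python
import math

def time_to_target(eval_log, target, higher_is_better=True):
    """Return the first wall-clock time (seconds) at which the metric
    reaches the workload's validation target, or inf if it never does.

    eval_log: list of (elapsed_seconds, metric_value) pairs (illustrative format).
    """
    for elapsed, metric in eval_log:
        reached = metric >= target if higher_is_better else metric <= target
        if reached:
            return elapsed
    return math.inf

# Example: an accuracy trace evaluated every ~30 minutes.
log = [(1800, 0.61), (3600, 0.70), (5400, 0.745), (7200, 0.771)]
print(time_to_target(log, target=0.77))  # -> 7200
```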
Over the past few years, we've built the AlgoPerf: Training Algorithms benchmark. The core ideas: (1) Only the training algorithm changes; everything else (hardware, model, data) stays fixed. (2) Submissions compete directly; no more weak baselines!
March 14, 2025 at 8:57 PM
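To make idea (1) concrete, here is a heavily simplified sketch of what a submission controls versus what the benchmark harness provides. The real submission interface lives in the AlgoPerf codebase linked above; the function signatures, dictionary keys, and the AdamW placeholder below are illustrative assumptions, not the actual API.

```python
# Simplified sketch of "only the training algorithm changes".
# The benchmark harness fixes the hardware, model, and data pipeline;
# the submission only supplies the pieces below (illustrative signatures).
import torch

def init_optimizer_state(model_params, hyperparameters):
    # The submission chooses the optimizer; here a plain AdamW placeholder.
    return torch.optim.AdamW(model_params, lr=hyperparameters["learning_rate"])

def update_params(optimizer, model, batch, loss_fn):
    # The harness supplies the model, batch, and loss function; the submission
    # only decides how to turn gradients into a parameter update.
    optimizer.zero_grad()
    loss = loss_fn(model(batch["inputs"]), batch["targets"])
    loss.backward()
    optimizer.step()
    return loss.item()
```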
These choices are often critical, but reliable empirical guidance is scarce. Instead, we rely on expert intuition, anecdotal evidence, and babysitting. Check out this learning rate schedule from the OPT paper, which was manually determined. There has to be a better way!
March 14, 2025 at 8:57 PM
Currently, training neural nets is a complicated & fragile process with many important choices: How should I set/tune the learning rate? Using what schedule? Should I use SGD or Adam (or maybe Nadam/Amos/Shampoo/SOAP/Muon/... the list is virtually endless)?
March 14, 2025 at 8:57 PM