George E. Dahl
@georgeedahl.bsky.social
Machine learning researcher @Google DeepMind. My opinions do not necessarily represent my employer. Prefer email over DMs.
https://scholar.google.com/citations?hl=en&user=ghbWy-0AAAAJ
https://www.cs.toronto.edu/~gdahl/
Reposted by George E. Dahl
We just released AlgoPerf v0.6! 🎉
✅ Rolling leaderboard
✅ Lower compute costs
✅ JAX jit migration
✅ Bug fixes & flexible API
Coming soon: More contemporary baselines + an LM workload…
github.com/mlcommons/al...
GitHub - mlcommons/algorithmic-efficiency: a benchmark and competition measuring neural network training speedups due to algorithmic improvements in both training algorithms and models.
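For context on the "JAX jit migration" item above: it means compiling training steps with jax.jit instead of running them op-by-op. A minimal, hypothetical sketch of a jitted step (the toy loss and update here are illustrative, not AlgoPerf's actual code):

```python
import jax
import jax.numpy as jnp

# Toy least-squares loss; AlgoPerf workloads define their own models and losses.
def loss_fn(params, batch):
    preds = batch["x"] @ params["w"]
    return jnp.mean((preds - batch["y"]) ** 2)

# jax.jit traces the step once and compiles it with XLA, so later calls
# run a single fused, device-resident computation instead of many small ops.
@jax.jit
def train_step(params, batch, lr):
    grads = jax.grad(loss_fn)(params, batch)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
```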
September 8, 2025 at 1:29 PM
My team is hiring a research scientist or engineer to work on methodological deep learning research! We study how to improve the deep learning "workflow" (developers.google.com/machine-lear...), with a special emphasis on training algorithms and recipes: job-boards.greenhouse.io/deepmind/job...
Research Engineer/Scientist, Training Algorithms
Mountain View, California, US
July 18, 2025 at 8:45 PM
Reposted by George E. Dahl
The explainer video: www.youtube.com/watch?v=_yX1...
ICLR 2025: Accelerating Neural Network Training (AlgoPerf)
YouTube video by Tübingen Machine Learning
April 3, 2025 at 11:15 AM
Reposted by George E. Dahl
We're all about acceleration! 😉
Watch @priya-kasimbeg.bsky.social & @fsschneider.bsky.social speedrun an explanation of the AlgoPerf benchmark, rules, and results all within a tight 5 minutes for our #ICLR2025 paper video on "Accelerating Neural Network Training". See you in Singapore!
April 3, 2025 at 11:15 AM
Reposted by George E. Dahl
Hi there! This account will post about the AlgoPerf benchmark and leaderboard updates for faster neural network training via better training algorithms. But let's start with what AlgoPerf is, what we have done so far, and how you can train neural nets ~30% faster.
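To make "better training algorithms" concrete: an AlgoPerf submission implements a fixed interface that the benchmark times across workloads. The sketch below is a hypothetical simplification; the names and signatures are illustrative only, and the real API is specified in the mlcommons/algorithmic-efficiency repo:

```python
from typing import Any, Dict, Tuple

import jax
import jax.numpy as jnp

# Hypothetical, simplified submission interface (illustrative, not the real API).
def init_optimizer_state(params: Any, hyperparameters: Dict[str, float]) -> Any:
    """Create whatever state the algorithm needs, e.g. momentum buffers."""
    return jax.tree_util.tree_map(jnp.zeros_like, params)

def update_params(
    params: Any,
    optimizer_state: Any,
    grads: Any,
    hyperparameters: Dict[str, float],
) -> Tuple[Any, Any]:
    """One training step: plain SGD with momentum as a stand-in algorithm."""
    mu = hyperparameters["momentum"]
    lr = hyperparameters["learning_rate"]
    new_state = jax.tree_util.tree_map(
        lambda m, g: mu * m + g, optimizer_state, grads)
    new_params = jax.tree_util.tree_map(
        lambda p, m: p - lr * m, params, new_state)
    return new_params, new_state
```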
March 14, 2025 at 8:57 PM
Reposted by George E. Dahl
Making LLMs run efficiently can feel scary, but scaling isn’t magic, it’s math! We wanted to demystify the “systems view” of LLMs and wrote a little textbook called “How To Scale Your Model” which we’re releasing today. 1/n
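A taste of the kind of back-of-the-envelope math the book leans on (all numbers below are illustrative assumptions, not measurements): the standard estimate that training a dense transformer costs roughly 6 FLOPs per parameter per training token.

```python
# Rough training-cost estimate for a dense transformer:
# ~6 FLOPs per parameter per training token (~2 forward, ~4 backward).
params = 70e9    # assumed 70B-parameter model
tokens = 1.4e12  # assumed 1.4T training tokens
total_flops = 6 * params * tokens  # ~5.9e23 FLOPs

# On a hypothetical accelerator sustaining 1e15 FLOP/s at 40% utilization:
effective_flops_per_s = 1e15 * 0.40
seconds = total_flops / effective_flops_per_s
print(f"{total_flops:.2e} FLOPs, {seconds / (86400 * 365):.0f} chip-years")
```

Dividing the chip-years by the number of chips gives wall-clock time, which is why parallelism, and the rest of the "systems view", matters.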
February 4, 2025 at 6:54 PM