bidiptas13.bsky.social
@bidiptas13.bsky.social
Evolve at the hyperscale!
Work co-led with Mattie Fellows and Juan Agustin Duque.
Made possible by #Isambard and AIRR

🌐 Website: eshyperscale.github.io
📝 Paper: alphaxiv.org/abs/2511.16652
💻 Code: github.com/ESHyperscale...
🥚 NanoEgg: github.com/ESHyperscale... (train in int 😉)
Evolution Strategies at the Hyperscale
General ML Training Made as Fast and Easy as Inference
eshyperscale.github.io
November 21, 2025 at 5:56 PM
Scaling LLM Reasoning with EGGROLL 🥚🧠📝

Finetuning RWKV-7 language models with 🥚 outperforms GRPO on Countdown and GSM8K ❗

🥚 significantly outperformed GRPO on the Countdown task, achieving 35% validation accuracy compared to GRPO's 23%❗
November 21, 2025 at 5:56 PM
EGGROLL 🥚for RL 🎮🤖

🥚 is competitive with, and in many cases better than, OpenES, even before considering the vast speed-up!

🥚 matched OpenES on 7/16 environments and outperformed it on another 7/16

🥚's low-rank approach does not compromise ES performance
November 21, 2025 at 5:56 PM
🥚 EGGROLLing in the Deep with 🚀 💯✕ Speedup

🥚 speed nearly reaches the throughput of pure batch inference, leaving OpenES far behind

🥚 reaches 91% of pure batch inference speed vs. OpenES reaching only 0.41%
November 21, 2025 at 5:56 PM
The EGGROLL Recipe
🧠🛠️ We replace full-rank perturbations with low-rank ones. Each aggregated update is still high rank, preserving expressivity while speeding up training

🥚 EGGROLL converges to the full-rank update at a fast rate of 1/rank. The method is effective even with a rank of 1
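
Here's a minimal NumPy sketch of that recipe, under my own assumptions (the names `es_step` and `fitness` and all hyperparameters are illustrative, not the authors' code): each population member perturbs a weight matrix with a rank-r outer product, but the fitness-weighted combination of all members' perturbations is generally high rank.

```python
import numpy as np

def es_step(theta, fitness, pop=64, rank=1, sigma=0.01, lr=0.1, rng=None):
    """One ES update on a weight matrix using low-rank perturbations.

    Member i perturbs theta with sigma * A_i @ B_i.T / sqrt(rank) -- cheap to
    sample and to apply -- but the combined update sums `pop` such matrices,
    so the step taken on theta is generally high rank.
    """
    if rng is None:
        rng = np.random.default_rng()
    m, n = theta.shape
    A = rng.standard_normal((pop, m, rank))
    B = rng.standard_normal((pop, n, rank))
    # Score every perturbed candidate.
    scores = np.array([
        fitness(theta + sigma * (A[i] @ B[i].T) / np.sqrt(rank))
        for i in range(pop)
    ])
    # Standard ES weighting: normalise scores, then combine each member's
    # low-rank perturbation, weighted by its normalised score.
    w = (scores - scores.mean()) / (scores.std() + 1e-8)
    update = np.einsum("p,pmr,pnr->mn", w, A, B) / (pop * np.sqrt(rank))
    return theta + lr * update

# Toy usage: drive a 4x3 matrix toward a fixed target.
rng = np.random.default_rng(0)
target = np.arange(12.0).reshape(4, 3)
theta = np.zeros((4, 3))
for _ in range(500):
    theta = es_step(theta, lambda W: -np.sum((W - target) ** 2), rng=rng)
```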
November 21, 2025 at 5:56 PM
We use EGGROLL 🥚 to train RNN language models from scratch using only integer datatypes (and no activation functions!), scaling population size from 64 to 262144

That's 2 (🐔🐔) orders of magnitude larger than prior ES work❗
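
As a purely illustrative aside (my own toy sketch, not NanoEgg or the paper's recipe): if the low-rank factors are drawn from {-1, +1} and the base weights are int8, every perturbed candidate stays an integer array, which is one way perturbation-based training can avoid floating point entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.integers(-4, 5, size=(8, 8), dtype=np.int8)   # toy int8 weight matrix

# Rank-1 sign perturbations: a @ b.T has entries in {-1, +1}, so adding it
# to integer weights never leaves integer arithmetic.
for _ in range(4):
    a = rng.choice(np.array([-1, 1], dtype=np.int8), size=(8, 1))
    b = rng.choice(np.array([-1, 1], dtype=np.int8), size=(8, 1))
    candidate = W + a @ b.T
    assert candidate.dtype.kind == "i"                 # still an integer dtype
```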
November 21, 2025 at 5:56 PM