@bidiptas13.bsky.social
Scaling LLM Reasoning with EGGROLL 🥚🧠📝

Using 🥚 to finetune RWKV-7 language models outperforms GRPO on Countdown and GSM8K ❗

🥚 significantly outperformed GRPO on the Countdown task, achieving 35% validation accuracy compared to GRPO's 23%❗
November 21, 2025 at 5:56 PM
EGGROLL 🥚 for RL 🎮🤖

🥚 is competitive with, and in many cases better than, OpenES, even before considering the vast speed-up!

🥚 matched OpenES on 7/16 environments and outperformed it on another 7/16

🥚's low-rank approach does not compromise ES performance
November 21, 2025 at 5:56 PM
🥚 EGGROLLing in the Deep with 🚀 💯✕ Speedup

🥚 speed nearly reaches the throughput of pure batch inference, leaving OpenES far behind

🥚 reaches 91% of pure batch inference throughput, while OpenES reaches only 0.41%
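
Why it gets so close to batch inference: each population member's weights are just W + AᵢBᵢᵀ with a tiny rank, so the expensive base matmul is computed once and shared across the whole population, and only a cheap rank-r correction differs per member. A minimal NumPy sketch of that idea (sizes and variable names are illustrative, not the paper's code):

```python
import numpy as np

# Illustrative sizes: d_out x d_in base weight, P population members, rank-r perturbations.
d_in, d_out, r, P = 512, 512, 4, 64
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))   # shared base weights
A = rng.standard_normal((P, d_out, r))   # per-member low-rank factor A_i
B = rng.standard_normal((P, d_in, r))    # per-member low-rank factor B_i
x = rng.standard_normal((P, d_in))       # one input per population member

# Naive ES forward pass: materialise every perturbed weight matrix (P full matmuls).
y_naive = np.einsum('pij,pj->pi', W[None] + A @ B.transpose(0, 2, 1), x)

# Low-rank forward pass: the expensive x @ W.T is shared across the population;
# only the cheap rank-r correction A_i (B_i^T x_i) is member-specific.
y_fast = x @ W.T + np.einsum('por,pr->po', A, np.einsum('pir,pi->pr', B, x))

assert np.allclose(y_naive, y_fast)
```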
November 21, 2025 at 5:56 PM
The EGGROLL Recipe
🧠🛠️ We replace full-rank perturbations with low-rank ones. The aggregated update over the population is still high rank, maintaining expressivity while training much faster

🥚 EGGROLL's update converges to the full-rank update at a 1/rank rate, and the method is effective even at rank 1
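
A minimal sketch of one ES step with low-rank perturbations, assuming a plain OpenES-style estimator with normalised fitness (the function names, hyperparameters, and exact scaling are my guesses, not the paper's implementation):

```python
import numpy as np

def low_rank_es_step(W, fitness_fn, pop=128, rank=1, sigma=0.01, lr=0.1, rng=None):
    """One ES step where each member perturbs W by sigma * A_i @ B_i.T / sqrt(rank).

    Every individual perturbation is rank-`rank`, but the returned update sums
    `pop` of them, so the step itself is high rank. Hypothetical sketch only.
    """
    rng = rng or np.random.default_rng()
    d_out, d_in = W.shape
    A = rng.standard_normal((pop, d_out, rank))
    B = rng.standard_normal((pop, d_in, rank))

    # Materialise the perturbations here for clarity; the fast path keeps them
    # factored, as in the forward-pass sketch above.
    eps = (A @ B.transpose(0, 2, 1)) / np.sqrt(rank)          # (pop, d_out, d_in)
    scores = np.array([fitness_fn(W + sigma * e) for e in eps])

    # OpenES-style gradient estimate with fitness normalisation.
    z = (scores - scores.mean()) / (scores.std() + 1e-8)
    return W + lr * np.einsum('p,pij->ij', z, eps) / (pop * sigma)
```

In practice the perturbations would stay in factored (A, B) form end to end; that, plus the shared base matmul shown earlier, is what keeps very large populations cheap.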
November 21, 2025 at 5:56 PM
We use EGGROLL 🥚 to train RNN language models from scratch using only integer datatypes (and no activation functions!), scaling population size from 64 to 262144

2 (🐔🐔) orders of magnitude larger than prior ES works❗
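
Because ES needs only forward passes (no gradients flow through the model), the network can be built from ops that aren't differentiable at all. A hypothetical sketch of what an integer-only, activation-free recurrent step could look like; the int8/int32 layout, the shift-based rescale, and the clipping are assumptions, not the paper's architecture:

```python
import numpy as np

def int_rnn_step(h, x, W_h, W_x, shift=8):
    """One step of a purely linear recurrent cell in integer arithmetic.

    int8 state/weights, exact int32 accumulation, then a power-of-two rescale
    and clip back into int8 range. No activation function anywhere.
    """
    acc = W_h.astype(np.int32) @ h.astype(np.int32) \
        + W_x.astype(np.int32) @ x.astype(np.int32)
    return np.clip(acc // (1 << shift), -128, 127).astype(np.int8)
```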
November 21, 2025 at 5:56 PM
Introducing 🥚 EGGROLL 🥚 (Evolution Guided General Optimization via Low-rank Learning)! 🚀 Scaling backprop-free Evolution Strategies (ES) for billion-parameter models at large population sizes

⚡100x Training Throughput
🎯Fast Convergence
🔢Pure Int8 Pretraining of RNN LLMs
November 21, 2025 at 5:56 PM