Finetuning RWKV-7 language models with 🥚 outperforms GRPO on Countdown and GSM8K ❗
🥚 significantly outperformed GRPO on the Countdown task, reaching 35% validation accuracy vs. GRPO's 23% ❗
🥚 is competitive with, and in many cases better than, OpenES, even before considering the vast speed-up!
🥚 matched OpenES on 7/16 environments and outperformed it on another 7/16
🥚's low-rank approach does not compromise ES performance
🥚's speed nearly reaches the throughput of pure batch inference, leaving OpenES far behind
🥚 reaches 91% of pure batch inference throughput, while OpenES reaches only 0.41%
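Why is this possible? With low-rank noise factors A and B, the perturbed matmul x(W + σABᵀ/√r) can be computed as xW + σ(xA)Bᵀ/√r: the expensive full-rank matmul xW is shared across the whole population, and each member only adds a cheap rank-r correction. Below is a minimal NumPy sketch of this idea, under our own assumptions about shapes and scaling (the function name is illustrative, not the released code):

```python
import numpy as np

def population_forward(x, W, A, B, sigma=0.01):
    """Hypothetical forward pass of one linear layer for a whole ES population.

    x: (pop, batch, d_in) inputs, W: (d_in, d_out) shared weights,
    A: (pop, d_in, r) and B: (pop, d_out, r) low-rank noise factors.
    Member i effectively uses W + sigma * A[i] @ B[i].T / sqrt(r),
    but that (d_in, d_out) matrix is never materialised per member.
    """
    r = A.shape[-1]
    base = x @ W                             # one shared matmul: same cost as plain batched inference
    xa = x @ A                               # (pop, batch, r): project inputs onto each member's noise factor
    correction = xa @ B.transpose(0, 2, 1)   # (pop, batch, d_out): cheap rank-r per-member term
    return base + (sigma / np.sqrt(r)) * correction
```

For realistic layer sizes the rank-r correction is a small fraction of the cost of the shared matmul, which is why evaluating a whole population can run close to plain batched inference.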
🧠🛠️ We replace full-rank perturbations with low-rank ones. Each aggregated update is still high rank, maintaining expressivity while training much faster
🥚 EGGROLL converges to the full-rank update at a fast 1/rank rate, and the method remains effective even at rank 1
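A minimal NumPy sketch of one such update for a single weight matrix, with our own assumptions about fitness shaping and constants (the function name and the standardised scores are illustrative, not the released implementation): each member draws a rank-r factorised perturbation AᵢBᵢᵀ, and the fitness-weighted sum of those rank-r terms forms the high-rank update.

```python
import numpy as np

def eggroll_es_step(W, fitness_fn, pop_size=64, rank=4, sigma=0.01, lr=0.1, rng=None):
    """One low-rank ES step for a single weight matrix W of shape (n, m).

    Each member perturbs W with sigma * A_i @ B_i.T / sqrt(rank), so only
    (n + m) * rank noise values are drawn per member instead of n * m.
    The fitness-weighted combination of pop_size rank-r terms is itself
    (generically) high rank, so the aggregated update keeps full expressivity.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, m = W.shape
    A = rng.standard_normal((pop_size, n, rank))
    B = rng.standard_normal((pop_size, m, rank))

    # Score each low-rank-perturbed candidate.
    fitness = np.array([
        fitness_fn(W + sigma * (A[i] @ B[i].T) / np.sqrt(rank))
        for i in range(pop_size)
    ])

    # Simple fitness shaping (assumed here): standardise scores to zero mean, unit variance.
    scores = (fitness - fitness.mean()) / (fitness.std() + 1e-8)

    # Fitness-weighted sum of the rank-r perturbations: an ES gradient estimate
    # built from pop_size rank-r terms, i.e. high rank overall.
    grad_est = np.einsum('p,pnr,pmr->nm', scores, A, B) / (pop_size * sigma * np.sqrt(rank))
    return W + lr * grad_est
```

The only change relative to a standard OpenES-style update is the structure of the noise: each member draws (n + m)·rank values instead of n·m, which is what makes large populations and large matrices affordable.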
2 (🐔🐔) orders of magnitude larger than prior ES works❗
⚡ 100x Training Throughput
🎯 Fast Convergence
🔢 Pure Int8 Pretraining of RNN LLMs