Marcelo Mattar
marcelomattar.bsky.social
Assistant professor at NYU.
But prediction is only half the story; we also need interpretability! We viewed tiny RNNs as dynamical systems with inputs (observations/rewards) and outputs (actions). Given their small size, we could visualize how their states evolved and discover the strategies they learned.
July 2, 2025 at 7:03 PM
Across six different reward-learning tasks, tiny RNNs consistently outperformed dozens of classical cognitive models in predicting the choices of individual animals and humans. Surprisingly, networks with just 2-4 units often performed best in these simple lab tasks.
July 2, 2025 at 7:03 PM
Our solution was to use very small RNNs, composed of 1-4 units. These models are still great at modeling biological behavior without the need for pre-specified assumptions, yet small enough for us to interpret their mechanisms, combining the best of both worlds.
July 2, 2025 at 7:03 PM
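To make the idea concrete, here is a minimal sketch of what such a "tiny RNN" looks like as a dynamical system: a 2-unit recurrent network whose inputs are the previous action and reward and whose outputs are action probabilities. The architecture, weights, and trial sequence here are all illustrative assumptions (the network is untrained, with random weights), not the authors' actual models; the point is only that with two hidden units, the state trajectory lives in a plane and can be visualized directly.

```python
import numpy as np

# Hypothetical, untrained 2-unit RNN: inputs are (one-hot previous
# action, scalar reward); outputs are softmax action probabilities.
rng = np.random.default_rng(0)
n_hidden, n_actions = 2, 2
n_inputs = n_actions + 1            # one-hot action + reward

W_in = rng.normal(0.0, 0.5, (n_hidden, n_inputs))
W_rec = rng.normal(0.0, 0.5, (n_hidden, n_hidden))
W_out = rng.normal(0.0, 0.5, (n_actions, n_hidden))
b = np.zeros(n_hidden)

def step(h, action, reward):
    """One trial: update the hidden state, return it with action probs."""
    x = np.zeros(n_inputs)
    x[action] = 1.0
    x[-1] = reward
    h_new = np.tanh(W_in @ x + W_rec @ h + b)
    logits = W_out @ h_new
    probs = np.exp(logits - logits.max())
    return h_new, probs / probs.sum()

# Roll the network through a short (made-up) trial sequence and record
# the 2-D state trajectory -- the object one would plot to read off
# the strategy the network has learned.
h = np.zeros(n_hidden)
trajectory = [h]
for action, reward in [(0, 1.0), (0, 0.0), (1, 1.0), (1, 1.0)]:
    h, probs = step(h, action, reward)
    trajectory.append(h)

trajectory = np.array(trajectory)   # shape (5, 2): one point per trial
```

Because tanh keeps every state inside the unit square, the whole trajectory fits in one 2-D plot, which is what makes mechanisms in such small networks inspectable in a way that large networks are not.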