Pavel Tolmachev
@pawa-pawa.bsky.social
Researcher in Computational Neuroscience at @PrincetonNeuro,
with a focus on neural representations, RNNs, and reinforcement learning
Finally, these distinctions are symptomatic of deeper differences: tanh and ReLU/sigmoid RNNs discover distinct circuit solutions to a context-dependent decision-making task. These differences in circuitry become critical when RNNs are exposed to novel stimuli outside the training range.
October 24, 2025 at 7:14 PM
We further show that RNNs with different activation functions exhibit distinct dynamics, as characterized by the configuration of fixed points and trajectory end points, with tanh RNNs consistently displaying significant divergence from ReLU and sigmoid ones.
October 24, 2025 at 7:14 PM
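A minimal sketch of the kind of fixed-point characterization mentioned above, not the paper's code: it numerically solves h* = phi(W h* + b) for a vanilla RNN under each activation function and counts the distinct solutions found. The network size, random weights, and number of seeds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import root

rng = np.random.default_rng(0)
N = 64                                   # hidden units (assumed)
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
b = np.zeros(N)

activations = {
    "tanh": np.tanh,
    "relu": lambda z: np.maximum(z, 0.0),
    "sigmoid": lambda z: 1.0 / (1.0 + np.exp(-z)),
}

def find_fixed_points(phi, n_seeds=200, tol=1e-8):
    """Solve phi(W h + b) - h = 0 from many random seeds; keep unique solutions."""
    found = []
    for _ in range(n_seeds):
        h0 = rng.normal(size=N)
        sol = root(lambda h: phi(W @ h + b) - h, h0, tol=tol)
        if sol.success and not any(np.linalg.norm(sol.x - f) < 1e-4 for f in found):
            found.append(sol.x)
    return found

for name, phi in activations.items():
    fps = find_fixed_points(phi)
    print(f"{name:8s}: {len(fps)} distinct fixed point(s) found")
```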
The choice of activation function in RNNs is often assumed to minimally affect their trajectories. We analyzed ReLU, sigmoid, and tanh RNNs on diverse tasks, revealing differences in their neural trajectories and individual neuron responses, challenging this assumption.
October 24, 2025 at 7:14 PM
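For concreteness, a minimal sketch of the kind of setup being compared, assuming a discrete-time vanilla RNN h_{t+1} = phi(W h_t + U x_t + b). The weights, input stream, and dimensions are illustrative, and the paper compares trained networks rather than a shared random network, so this only shows how trajectories under different activations can be rolled out and contrasted.

```python
import numpy as np

rng = np.random.default_rng(1)
N, I, T = 64, 3, 200                     # hidden units, input dim, time steps (assumed)
W = rng.normal(scale=1.0 / np.sqrt(N), size=(N, N))
U = rng.normal(scale=1.0 / np.sqrt(I), size=(N, I))
b = np.zeros(N)
x = rng.normal(size=(T, I))              # shared input stream for all three networks

def run_rnn(phi):
    """Roll out the same weights and inputs under a given activation function."""
    h = np.zeros(N)
    traj = np.empty((T, N))
    for t in range(T):
        h = phi(W @ h + U @ x[t] + b)
        traj[t] = h
    return traj

trajs = {
    "tanh": run_rnn(np.tanh),
    "relu": run_rnn(lambda z: np.maximum(z, 0.0)),
    "sigmoid": run_rnn(lambda z: 1.0 / (1.0 + np.exp(-z))),
}

# Mean distance between state trajectories under identical weights and inputs.
print(np.linalg.norm(trajs["tanh"] - trajs["relu"], axis=1).mean())
```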