Ábel Ságodi
@neurabel.bsky.social
PhD student at the Champalimaud Centre for the Unknown
https://asagodi.github.io/
(TL;DR) If you are modeling neural computation over long time scales, "Fading Memory" is a fundamental limitation.

We provide the theoretical framework to ensure your model can capture the memories, decisions, and rhythms that actually matter.

#NeuroAI #DynamicalSystems #NeuralODEs (6/6)
February 12, 2026 at 3:55 PM
Subsequently, we can guarantee a temporal generalization error bound for a given precision and reliability.

(5/6)
Why not just "train longer"? You hit topological walls.

We identify three specific failure modes for infinite-time dynamics:

1️⃣ B-type: Tiny errors near a decision boundary switch the outcome.
2️⃣ P-type: Oscillations drift out of phase.
3️⃣ D-type: Continuous attractors break into points.
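A toy illustration of the B-type failure (my own minimal sketch, not a model from the paper): in the 1-D bistable flow dx/dt = x − x³, the stable states are ±1 and the decision boundary is x = 0. An arbitrarily small error near that boundary flips which attractor the infinite-time trajectory reaches.

```python
def simulate(x0, dt=0.01, steps=2000):
    """Euler-integrate the bistable flow dx/dt = x - x**3 from x0."""
    x = x0
    for _ in range(steps):
        x += dt * (x - x**3)
    return x

# Two initial conditions a mere 2e-6 apart, straddling the boundary:
left = simulate(-1e-6)   # converges to the -1 attractor
right = simulate(+1e-6)  # converges to the +1 attractor
print(round(left), round(right))  # → -1 1: the outcome switched
```

Any finite-time error bound near x = 0 says nothing about the infinite-time outcome, which is exactly why "train longer" cannot fix this failure mode.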

(3/6)
Decision making & working memory require multistability: distinct basins of attraction that hold a choice or a continuous value.

If your model has Fading Memory (as liquid state machines do), it must eventually drift back to a single global baseline. It literally cannot hold a memory forever.
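The contrast in one toy sketch (my own illustration, not the paper's model): a leaky unit with fading memory forgets a pulse input and relaxes to baseline, while a bistable unit latches it indefinitely.

```python
def run(f, x0, dt=0.01, steps=5000):
    """Euler-integrate dx/dt = f(x) from x0 for a long horizon."""
    x = x0
    for _ in range(steps):
        x += dt * f(x)
    return x

fading = lambda x: -x          # single global attractor at 0: fading memory
bistable = lambda x: x - x**3  # attractors at +1 and -1 can store one bit

x_pulse = 1.0                  # state right after a brief input pulse
print(run(fading, x_pulse))    # drifts back to baseline (~0): pulse forgotten
print(run(bistable, x_pulse))  # stays at ~1: the memory is held
```

Extending the horizon only makes the fading unit's state closer to baseline; no amount of extra simulation time recovers the stored bit.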

(2/6)
Can we guarantee that an RNN's behavior generalizes well for infinite time? 🧠♾️
Similar to universal approximation theorems for deep nets, such guarantees exist for systems that eventually forget everything. We prove them for multistable systems!
arxiv.org/abs/2602.08640 w/ @memming.bsky.social
(1/6)