Alessio Quaresima
@bluedebris.bsky.social
Researcher in computational neuroscience.
Interested in memory and perception!

Working with a great team @BathellierLab, Institut de l'Audition, Paris.
Personal page: https://aquaresima.github.io
Julia Spiking Neural Networks: https://bit.ly/4mAbRly
Sure, in the end, everything is an RNN ;)

Mongillo's model indeed requires pre-formed attractors. However, that model opens the door to other sub-threshold memory mechanisms that can maintain STM when LTMs are not yet present.
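To make the point concrete, here is a minimal sketch (mine, not the paper's code) of the Mongillo-Barak-Tsodyks facilitation/depression variables, with ballpark parameters from the 2008 paper and an invented spike train:

```julia
# Sketch of the Mongillo–Barak–Tsodyks (2008) synaptic WM variables:
# u (release probability, facilitation) and x (available resources, depression).
function silent_trace(; U = 0.2, τF = 1.5, τD = 0.2, dt = 1e-3)
    u, x = U, 1.0
    burst = Set(100:20:180)               # presynaptic spikes at 0.10–0.18 s (invented)
    for k in 1:3000                       # simulate 3 s
        u += dt * (U - u) / τF            # facilitation decays back to baseline
        x += dt * (1 - x) / τD            # resources recover
        if k in burst
            u += U * (1 - u)              # residual calcium boosts release probability
            x -= u * x                    # vesicle depletion
        end
        k in (500, 1000, 2000) &&         # probe the trace with no spiking at all
            println("t=$(k*dt)s  u=$(round(u; digits=3))  x=$(round(x; digits=3))")
    end
end
silent_trace()
```

After the burst, x recovers within ~τD while u decays slowly over τF ≈ 1.5 s, so the synapse holds the item for a second or two with zero persistent firing - exactly the kind of silent, sub-threshold trace I mean.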
October 10, 2025 at 1:55 PM
So maybe, if an RNN has memory that scales with its size, it is a better model of activated LTM than of STM. No?
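For scale intuition, a toy Hopfield sketch (mine, not from the thread): stored-pattern capacity grows roughly linearly with network size (~0.14 N), which looks like LTM, whereas behavioural STM stays stuck at 4-7 items whatever the brain size:

```julia
# Toy Hopfield network: capacity scales with N, unlike the fixed 4–7 item span of STM.
using Random; Random.seed!(1)

function recall_ok(N, P; steps = 20)
    ξ = rand((-1, 1), N, P)                 # P random ±1 patterns
    W = (ξ * ξ') / N; W[1:N+1:end] .= 0     # Hebbian weights, no self-coupling
    s = copy(ξ[:, 1])                       # start at stored pattern 1
    for _ in 1:steps
        s .= sign.(W * s .+ 1e-9)           # synchronous update (tiny tie-break)
    end
    abs(s' * ξ[:, 1]) / N > 0.95            # did it stay near the pattern?
end

for N in (100, 200, 400)
    P = round(Int, 0.1 * N)                 # load kept below the ~0.14 N limit
    println("N=$N, P=$P stored: recall = ", recall_ok(N, P))
end
```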
October 10, 2025 at 8:46 AM
4-7 *new* items! If you learn to use a retrieval structure, you can go pretty big - as all those crazy mnemonists do!

Online WM of novel items requires variable binding, and I doubt you can get that in RNNs.

Check this out for a cogsci perspective
pmc.ncbi.nlm.nih.gov/articles/PMC...
Short-Term Memory and Long-Term Memory are Still Different
A commonly expressed view is that short-term memory (STM) is nothing more than activated long-term memory. If true, this would overturn a central tenet of cognitive psychology—the idea that there are ...
pmc.ncbi.nlm.nih.gov
October 9, 2025 at 3:53 PM
IMHO, because it does not have "memory slots" ready for it; and even if it has memory slots, it lacks the machinery to convert them back into sensory representations. But you're the RNN expert - if you say so, I believe you ;)
October 9, 2025 at 9:53 AM
Can it distinguish a shape it saw once from one it never saw? Even in the presence of visual distractors?
October 9, 2025 at 9:19 AM
Maybe we do have five stable attractors in PFC and use them to re-cue sensory perceptions. But we need to be clear about active/silent interactions to move forward in this direction.
October 9, 2025 at 8:56 AM
True. However, attractor `iff` persistent activity holds only if we consider the firing rate as the only variable at play in the brain. Any cognitive theory of WM requires allocating new items on the fly. Can we build attractors through single-shot learning? Or do we repurpose the same ones?
October 9, 2025 at 8:53 AM
True! However, we have learned about long-term memory formation and maintenance from snails. Divergent evolutionary paths can still implement similar computational mechanisms. IMHO, recurrent activity is not as prominent in the monkey PFC as in the fruit fly MB, but the comparison holds.
October 9, 2025 at 8:44 AM
If you're still updating the starter pack, add me in ;)
October 7, 2025 at 8:33 AM
I want to be the guy on the left. Libera nos, Domine, de AI (deliver us, O Lord, from AI).
October 7, 2025 at 8:29 AM
In the image I see a middle-aged person with a wise face who is - probably - mastering a complex piece of software, versus a youngster with a naive smile clicking buttons on an anonymous interface while the world burns.
October 7, 2025 at 8:29 AM
Then send a link to the blog ;)
October 6, 2025 at 3:30 PM
And concluding with a beautiful thermodynamics of learning:

The correct learning rate depends on the noise of the training set (σ_η)!
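As a toy version of that claim (my sketch, not the speaker's slides): for one-sample SGD on noisy linear regression, the steady-state error grows with both the learning rate and the label noise σ_η, so noisier data favours a smaller step:

```julia
# Toy check: steady-state SGD error on 1-D linear regression with label noise σ_η.
using Random, Statistics; Random.seed!(2)

function sgd_mse(lr, σ; w_true = 1.0, T = 50_000)
    w, errs = 0.0, Float64[]
    for t in 1:T
        x = randn()
        y = w_true * x + σ * randn()        # noisy label
        w -= lr * (w * x - y) * x           # one-sample gradient step
        t > T ÷ 2 && push!(errs, (w - w_true)^2)
    end
    mean(errs)                              # average steady-state excess error
end

for σ in (0.1, 1.0), lr in (0.01, 0.3)
    println("σ_η=$σ  lr=$lr  E[(w-w*)²] ≈ ", round(sgd_mse(lr, σ); sigdigits=2))
end
```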
September 30, 2025 at 2:11 PM
And remembering the great neuroscientist **and Middle East peace activist** Daniel Amit!
September 30, 2025 at 1:57 PM
The library strikes a balance between creating new models and composing them in fairly complex networks.

If you would like a specific feature and cannot achieve it, just contact me and we'll find a way together!
September 26, 2025 at 2:01 PM
Where do they get all this money to infest our market anyway???
September 17, 2025 at 11:36 AM
And yess! The students loved it :)
September 16, 2025 at 3:35 PM
Thank you! I wanted to feel better this morning 🤣
September 12, 2025 at 7:36 AM