Alisa Leshchenko 🦕
@borromeanlink.bsky.social
CompNeuro @Columbia Zuckerman Institute, Fusi Lab | Cognitive maps, abstraction, compositionality in a neural substrate | #NeuroAI
Reposted by Alisa Leshchenko 🦕
October 31, 2025 at 6:15 PM
Reposted by Alisa Leshchenko 🦕
LLMs are trained to compress data by mapping sequences to high-dim representations!
How does the complexity of this mapping change across LLM training? How does it relate to the model’s capabilities? 🤔
Announcing our #NeurIPS2025 📄 that dives into this.

🧵below
#AIResearch #MachineLearning #LLM
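The thread doesn't say which complexity measure the paper uses; one common proxy for the effective dimensionality of a representation is the participation ratio of its covariance eigenspectrum. A minimal sketch under that assumption (all data here is synthetic):

```python
import numpy as np

# Participation ratio: a standard proxy for effective dimensionality
# (NOT necessarily the measure used in the paper above).
def participation_ratio(H):
    """H: (samples, features). Returns effective dimensionality in [1, features]."""
    lam = np.linalg.eigvalsh(np.cov(H.T))
    return lam.sum() ** 2 / (lam ** 2).sum()

rng = np.random.default_rng(0)
H_iso = rng.normal(size=(1000, 10))                            # isotropic: uses all 10 dims
H_low = rng.normal(size=(1000, 2)) @ rng.normal(size=(2, 10))  # rank-2 structure in 10 dims

pr_iso, pr_low = participation_ratio(H_iso), participation_ratio(H_low)
```

Isotropic representations score near the ambient dimension (~10 here), while low-rank representations score near their intrinsic rank (~2), so tracking this quantity across training checkpoints is one way to watch mapping complexity evolve.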
October 31, 2025 at 4:19 PM
Reposted by Alisa Leshchenko 🦕
Excited to share a new preprint w/ @annaschapiro.bsky.social! Why are there gradients of plasticity and sparsity along the neocortex–hippocampus hierarchy? We show that brain-like organization of these properties emerges in ANNs that meta-learn layer-wise plasticity and sparsity. bit.ly/4kB1yg5
A gradient of complementary learning systems emerges through meta-learning
Long-term learning and memory in the primate brain rely on a series of hierarchically organized subsystems extending from early sensory neocortical areas to the hippocampus. The components differ in t...
bit.ly
July 16, 2025 at 4:15 PM
Reposted by Alisa Leshchenko 🦕
Connectome data suggest the brain's synaptic weights follow heavy-tailed distributions, yet most analyses of RNNs assume Gaussian connectivity.

🧵⬇️ Our @alleninstitute.org #NeurIPS2025 paper shows heavy-tailed weights can strongly affect dynamics and trade off robustness against attractor dimension.
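The contrast between the two connectivity ensembles is easy to see in simulation. A minimal numpy sketch (parameters illustrative, not from the paper): Gaussian entries versus Student-t entries rescaled to the same variance, compared by tail heaviness and spectral radius:

```python
import numpy as np

rng = np.random.default_rng(0)
N, g = 200, 1.0

# Classic Gaussian random-RNN ensemble: W_ij ~ N(0, g^2/N)
W_gauss = rng.normal(0.0, g / np.sqrt(N), size=(N, N))

# Heavy-tailed ensemble: Student-t entries (df=3 => heavy tails, finite variance),
# rescaled so both ensembles have matched entry variance g^2/N
t = rng.standard_t(df=3, size=(N, N))
W_heavy = t * (g / np.sqrt(N)) / t.std()

def excess_kurtosis(W):
    """~0 for Gaussian entries; large for heavy-tailed entries."""
    w = W.ravel()
    return np.mean((w - w.mean()) ** 4) / w.std() ** 4 - 3.0

def spectral_radius(W):
    """Largest eigenvalue magnitude, which governs linearized dynamics."""
    return np.abs(np.linalg.eigvals(W)).max()
```

Even with matched variance, the heavy-tailed matrix concentrates much of its norm in a few large entries, which is exactly the regime where Gaussian-based mean-field intuitions can fail.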
October 30, 2025 at 2:54 PM
Reposted by Alisa Leshchenko 🦕
**Discovering network dynamics**
One more on estimating dynamics of complex systems, this time with symbolic regression
doi.org/10.1038/s435...
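One standard recipe for symbolic regression of dynamics is SINDy-style sparse regression onto a library of candidate terms; a toy sketch of that idea (illustrative only, not necessarily the paper's exact method):

```python
import numpy as np

# Recover dx/dt = -x + 0.5*x^3 from (noiseless) data by regressing measured
# derivatives onto a library of candidate terms, then thresholding for sparsity.
rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, size=400)       # sampled states
dxdt = -1.0 * x + 0.5 * x ** 3             # "measured" derivatives

Theta = np.column_stack([x, x ** 2, x ** 3])        # candidate term library
coef, *_ = np.linalg.lstsq(Theta, dxdt, rcond=None) # fit coefficients
coef[np.abs(coef) < 0.1] = 0.0                      # keep only strong terms
```

With clean data the thresholded coefficients recover the generating terms exactly; the interesting (and hard) cases are noisy derivatives and networked systems, which is where methods like the linked paper's come in.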
October 30, 2025 at 4:57 PM
Reposted by Alisa Leshchenko 🦕
Total Lunar Eclipse - 26 May 2021 - From Ángel López-Sánchez - https://flic.kr/p/2m1PLyZ
October 16, 2025 at 4:00 AM
Jorge Luis Borges
October 16, 2025 at 4:19 AM
Reposted by Alisa Leshchenko 🦕
Happy birthday Kay Sage. The artist & poet, famous for her surrealist paintings, often referencing architecture & the built environment, was born today in 1898. #surrealism
June 25, 2025 at 5:13 PM
Reposted by Alisa Leshchenko 🦕
October 9, 2025 at 3:31 PM
Reposted by Alisa Leshchenko 🦕
What do we talk about when we talk about "readout"?

I argued that our overly specialized, modular approach to studying the brain has given us a simplistic view of readout.

🧠📈
October 13, 2025 at 3:15 PM
Reposted by Alisa Leshchenko 🦕
Prominence 10-26-10 - From Jason Major (jpmajor.bsky.social) - https://flic.kr/p/8NkYgk
October 14, 2025 at 9:00 AM
Reposted by Alisa Leshchenko 🦕
ESA ROSETTA 14 July 2015 - From 2di7 & titanio44 - https://flic.kr/p/vpqn2z
October 15, 2025 at 9:00 AM
Reposted by Alisa Leshchenko 🦕
Mimas, Epimetheus and Rings - From Gordan Ugarković (ugordan.bsky.social) - https://flic.kr/p/5oPunj
October 15, 2025 at 1:00 PM
Reposted by Alisa Leshchenko 🦕
🧠🚨 How does the hippocampus transform the visual similarity space to resolve memory interference?

In this new preprint, we found that the hippocampus sequentially inverts the behaviorally relevant dimensions of similarity 🧵

www.biorxiv.org/content/10.1...
Hippocampal transformations occur along dimensions of memory interference
The role of the hippocampus in resolving memory interference has been greatly elucidated by considering the relationship between the similarity of visual stimuli (input) and corresponding similarity o...
www.biorxiv.org
October 14, 2025 at 4:48 PM
Reposted by Alisa Leshchenko 🦕
They replace the Variational Autoencoder (VAE) with pretrained representation encoders (e.g., DINO, SigLIP, MAE) paired with trained decoders, which they term Representation Autoencoders (RAEs).
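The core idea — a frozen pretrained encoder plus a trained decoder — can be sketched in a linear toy version (shapes and data are illustrative stand-ins, not the actual DINO/SigLIP/MAE pipeline):

```python
import numpy as np

# Frozen "pretrained" encoder E maps inputs to latents; only the decoder D is
# trained (here: closed-form least squares) to reconstruct the inputs.
rng = np.random.default_rng(1)
n, d, k = 500, 64, 32                      # samples, input dim, latent dim

X = rng.normal(size=(n, d))                # stand-in for image features
E = rng.normal(size=(d, k)) / np.sqrt(d)   # frozen encoder (never updated)

Z = X @ E                                  # latents from the frozen encoder
D, *_ = np.linalg.lstsq(Z, X, rcond=None)  # "trained" decoder

rel_err = np.mean((X - Z @ D) ** 2) / np.mean(X ** 2)
```

The point of the construction: all learning pressure lands on the decoder, so the latent space inherits whatever structure the pretrained encoder already has rather than being shaped by a reconstruction-plus-KL objective as in a VAE.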
October 15, 2025 at 3:49 AM
Reposted by Alisa Leshchenko 🦕
Preprint Alert 🚀

Can we simultaneously learn transformation-invariant and transformation-equivariant representations with self-supervised learning?

TL;DR Yes! This is possible via simple predictive learning & architectural inductive biases – without extra loss terms and predictors!

🧵 (1/10)
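A classic toy example of one representation carrying both properties at once (an illustration of the terms, not the paper's model): for circular shifts of a 1-D signal, the Fourier amplitude spectrum is shift-invariant, while the full complex spectrum is shift-equivariant — its phases rotate by a known function of the shift:

```python
import numpy as np

rng = np.random.default_rng(0)
N, s = 64, 5
x = rng.normal(size=N)
x_shift = np.roll(x, s)                    # the transformation: circular shift by s

F, F_s = np.fft.fft(x), np.fft.fft(x_shift)
k = np.arange(N)

invariant_part = np.abs(F)                 # unchanged by the shift
equivariant_map = np.exp(-2j * np.pi * k * s / N)  # how the spectrum transforms
```

Self-supervised methods aim for learned analogues of this split — features that factor a transformation into a part that ignores it and a part that tracks it predictably.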
May 14, 2025 at 12:53 PM
Reposted by Alisa Leshchenko 🦕
So excited to see this preprint released from the lab into the wild.

Charlotte has developed a theory of how learning curriculum influences generalization.
Our theory makes straightforward neural predictions that can be tested in future experiments. (1/4)

🧠🤖 🧠📈 #MLSky
🚨 New preprint alert!

🧠🤖
We propose a theory of how learning curriculum affects generalization through neural population dimensionality. Learning curriculum is a determining factor of neural dimensionality - where you start from determines where you end up.
🧠📈

A 🧵:

tinyurl.com/yr8tawj3
The curriculum effect in visual learning: the role of readout dimensionality
Generalization of visual perceptual learning (VPL) to unseen conditions varies across tasks. Previous work suggests that training curriculum may be integral to generalization, yet a theoretical explan...
tinyurl.com
September 30, 2025 at 2:35 PM
Reposted by Alisa Leshchenko 🦕
Tomorrow is the next meeting of the MIT #Consciousness Club. Zoom link 🔗⬇️
October 15, 2025 at 9:38 PM
Reposted by Alisa Leshchenko 🦕
October 13, 2025 at 11:54 AM
Reposted by Alisa Leshchenko 🦕
Cielo di piombo, ispettore Callaghan...
October 5, 2025 at 11:44 AM
Reposted by Alisa Leshchenko 🦕
Jour...sur la grève
October 8, 2025 at 9:14 PM
Reposted by Alisa Leshchenko 🦕
The domain of Arnheim (1962)
by René Magritte
October 13, 2025 at 9:33 AM
Reposted by Alisa Leshchenko 🦕
Kay Sage
October 13, 2025 at 9:31 AM
Reposted by Alisa Leshchenko 🦕
Misha Kovalov
October 15, 2025 at 8:03 AM