Andrew Lampinen
@lampinen.bsky.social
Interested in cognition and artificial intelligence. Research Scientist at Google DeepMind. Previously cognitive science at Stanford. Posts are mine.
lampinen.github.io
Pinned
Why does AI sometimes fail to generalize, and what might help? In a new paper (arxiv.org/abs/2509.16189), we highlight the latent learning gap — which unifies findings from language modeling to agent navigation — and suggest that episodic memory complements parametric learning to bridge it. Thread:
Latent learning: episodic memory complements parametric learning by enabling flexible reuse of experiences
When do machine learning systems fail to generalize, and what mechanisms could improve their generalization? Here, we draw inspiration from cognitive science to argue that one weakness of machine lear...
arxiv.org
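[Editor's note: for readers unfamiliar with the idea in the pinned post, here is a minimal toy sketch of how an episodic store can complement parametric learning. This is an illustration only, not the paper's implementation; the EpisodicStore class, the random keys, and the episode strings are all made up for the example. The point is that raw experiences kept verbatim can be retrieved by similarity when they later become relevant, whereas a parametric model only keeps whatever its training objective compressed into its weights.]

```python
import numpy as np

class EpisodicStore:
    """Toy episodic memory: store (key, episode) pairs and retrieve by similarity."""

    def __init__(self):
        self.keys, self.episodes = [], []

    def write(self, key: np.ndarray, episode: str) -> None:
        # Normalize keys so retrieval uses cosine similarity.
        self.keys.append(key / (np.linalg.norm(key) + 1e-8))
        self.episodes.append(episode)

    def read(self, query: np.ndarray, k: int = 3) -> list[str]:
        query = query / (np.linalg.norm(query) + 1e-8)
        sims = np.stack(self.keys) @ query          # cosine similarity to all keys
        top = np.argsort(sims)[::-1][:k]            # k most similar episodes
        return [self.episodes[i] for i in top]      # reused flexibly, e.g. in context

# Hypothetical usage: in practice the keys would come from an encoder;
# here they are random stand-ins.
rng = np.random.default_rng(0)
store = EpisodicStore()
for i in range(100):
    store.write(rng.normal(size=16), f"episode {i}")
print(store.read(rng.normal(size=16)))
```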
Reposted by Andrew Lampinen
I wrote a short article on AI Model Evaluation for the Open Encyclopedia of Cognitive Science 📕👇

Hope this is helpful for anyone who wants a super broad, beginner-friendly intro to the topic!

Thanks @mcxfrank.bsky.social and @asifamajid.bsky.social for this amazing initiative!
February 12, 2026 at 10:22 PM
Reposted by Andrew Lampinen
This work by @mariaeckstein.bsky.social et al is a nice example of how progress in psychology can be expedited with machine learning.

How long before this type of approach is expected for models-of-behavior papers? My guess: not long. (If you are a trainee, nudge!)

www.nature.com/articles/s41...
Hybrid neural–cognitive models reveal how memory shapes human reward learning - Nature Human Behaviour
Using artificial neural networks applied to human data, Eckstein et al. show that good models of reinforcement learning require memory components that track representations of the past.
www.nature.com
February 10, 2026 at 8:29 AM
Reposted by Andrew Lampinen
The visual world is composed of objects, and those objects are composed of features. But do VLMs exploit this compositional structure when processing multi-object scenes? In our 🆒🆕 #ICLR2026 paper, we find they do – via emergent symbolic mechanisms for visual binding. 🧵👇
February 5, 2026 at 8:55 PM
Interesting results by @eghbal-hosseini.bsky.social on how language models' representation geometry evolves during different types of in-context learning!
How do diverse context structures reshape representations in LLMs?
In our new work, we explore this via representational straightening. We found LLMs are like a Swiss Army knife: they select different computational mechanisms reflected in different representational structures. 1/
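[Editor's note: here is a minimal sketch of one common way to quantify representational straightening — the average angle between successive steps of a layer's token-by-token hidden-state trajectory, where lower curvature means a straighter trajectory. This is an illustration under that assumption, not necessarily the paper's exact metric; the function name and random stand-in arrays are made up for the example.]

```python
import numpy as np

def average_curvature(states: np.ndarray) -> float:
    """Mean angle (radians) between consecutive difference vectors of a
    hidden-state trajectory with shape (num_tokens, hidden_dim).
    Lower values indicate a straighter trajectory."""
    diffs = np.diff(states, axis=0)                                   # v_t = x_{t+1} - x_t
    diffs = diffs / (np.linalg.norm(diffs, axis=1, keepdims=True) + 1e-8)
    cosines = np.sum(diffs[:-1] * diffs[1:], axis=1)                  # cos(angle) per step
    return float(np.mean(np.arccos(np.clip(cosines, -1.0, 1.0))))

# Hypothetical usage: the arrays below are random stand-ins; in practice they
# would be a model's input embeddings and a later layer's hidden states for the
# same token sequence. A positive difference means the later layer straightened
# the trajectory relative to the input.
rng = np.random.default_rng(0)
input_traj = rng.normal(size=(32, 768))
layer_traj = rng.normal(size=(32, 768))
print(average_curvature(input_traj) - average_curvature(layer_traj))
```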
February 4, 2026 at 2:58 AM
Was a pleasure to discuss the cognitive basis of reasoning at an @ivado.bsky.social workshop with legends like @alisongopnik.bsky.social @lauraruis.bsky.social @taylorwwebb.bsky.social and Andrew Granville!
January 31, 2026 at 8:27 PM
New paper! In arxiv.org/abs/2601.20834 we study how language models' representations of things like factuality evolve over a conversation. We find that in edge-case conversations, e.g. about model consciousness or delusional content, model representations can change dramatically! 1/
January 29, 2026 at 1:54 PM
Should you go to academia or industry for research in AI or cognitive science? It's the most common question I get asked by PhD students, and I've written up some of my thoughts on the answer, as an epilogue to my research-focused series on these fields: infinitefaculty.substack.com/p/on-researc...
On research careers in academia and industry
The epilogue to a series on Cognitive Science and AI
infinitefaculty.substack.com
January 23, 2026 at 3:13 PM
Reposted by Andrew Lampinen
Our experiences have countless details, and it can be hard to know which matter.

How can we behave effectively in the future when, right now, we don't know what we'll need?

Out today in @nathumbehav.nature.com, @marcelomattar.bsky.social and I find that people solve this by using episodic memory.
Episodic memory facilitates flexible decision-making via access to detailed events - Nature Human Behaviour
Nicholas and Mattar found that people use episodic memory to make decisions when it is unclear what will be needed in the future. These findings reveal how the rich representational capacity of episod...
www.nature.com
January 23, 2026 at 1:18 PM
Reposted by Andrew Lampinen
I'm very excited about this paper with @yngwienielsen.bsky.social just out in @nathumbehav.nature.com in which we provide evidence for the mental representation of non-hierarchical linguistic structure in language use.
🧵 1/4
Read the paper here: rdcu.be/eZ26u
Evidence for the representation of non-hierarchical structures in language
Nature Human Behaviour - Language is often thought to be represented through hierarchically structured units. Nielsen and Christiansen find that non-hierarchical structures are present across...
rdcu.be
January 21, 2026 at 10:07 PM
Reposted by Andrew Lampinen
arxiv.org/abs/2601.11432
I want to share an astonishing result. LLMs can "translate" Jabberwocky texts like 'He dwushed a ghanc zawk' and even 'In the BLANK BLANK, BLANK BLANK has BLANK over any BLANK BLANK's BLANK'. This has profound consequences for thinking about.. 1/2
arxiv.org
January 19, 2026 at 3:27 AM
When are impossibility proofs misleading? In infinitefaculty.substack.com/p/be-wary-of..., I discuss a common issue I see: proofs that are logically valid, but whose underlying assumptions are unjustified. As examples, I consider ‘proofs’ that cognition cannot be tractably learned, and that LMs are 1/
Be wary of assumptions in impossibility arguments
A proof is only as good as its assumptions
infinitefaculty.substack.com
January 13, 2026 at 3:33 PM
What can cognitive science learn from AI? In infinitefaculty.substack.com/p/what-cogni... I outline how AI has found that scale and richness of learning experiences fundamentally change learning & generalization — and how I believe we should rethink cognitive experiments & theories in response.
What cognitive science can learn from AI
#3 in a series on cognitive science and AI
infinitefaculty.substack.com
January 5, 2026 at 6:10 PM
New post! Last week I shared why I thought cognitive (neuro)science hasn’t contributed as much as one might hope to the design of AI systems; this week I'm sharing my thoughts on how methods and principles from these fields *have* been useful in my work. infinitefaculty.substack.com/p/how-cognit...
How cognitive science can contribute to AI: methods for understanding
#2 in a series on cognitive science and AI
infinitefaculty.substack.com
December 23, 2025 at 5:10 PM
Why isn’t modern AI built around principles from cognitive science or neuroscience? Starting a Substack (infinitefaculty.substack.com/p/why-isnt-m...) by writing down my thoughts on that question, as the first in a series of posts on the relation between these fields. 1/3
Why isn’t modern AI built around principles from cognitive science?
First post in a series on cognitive science and AI
infinitefaculty.substack.com
December 16, 2025 at 3:40 PM
Reposted by Andrew Lampinen
I'm more and more convinced that low-dimensional manifolds in the brain are just an artifact of the experimental designs and analyses we use...

🧠📈 🧪
Dimensionality reduction may be the wrong approach to understanding neural representations. Our new paper shows that across human visual cortex, dimensionality is unbounded and scales with dataset size—we show this across nearly four orders of magnitude. journals.plos.org/ploscompbiol...
December 11, 2025 at 8:19 PM
Reposted by Andrew Lampinen
Dimensionality reduction may be the wrong approach to understanding neural representations. Our new paper shows that across human visual cortex, dimensionality is unbounded and scales with dataset size—we show this across nearly four orders of magnitude. journals.plos.org/ploscompbiol...
December 11, 2025 at 3:32 PM
Heading to NeurIPS this week! Let me know if you want to chat about the science of what models learn, cognitive science, interpretability, what models learn in context vs. from their training data, etc. A few things I'm involved in:
November 30, 2025 at 5:56 PM
Amazing opportunity to work with a brilliant researcher and all-around wonderful person — definitely apply if you're interested in memory & perception at the intersection of AI & cognitive (neuro)science!
starting fall 2026 i'll be an assistant professor at @upenn.edu 🥳

my lab will develop scalable models/theories of human behavior, focused on memory and perception

currently recruiting PhD students in psychology, neuroscience, & computer science!

reach out if you're interested 😊
November 25, 2025 at 9:47 PM
Very important point! We've made arguments from a computational perspective that low-variance features can be computationally relevant (bsky.app/profile/lamp...), but it's much cooler to see it demonstrated on a model of real neural dynamics.
“Our findings challenge the conventional focus on low-dimensional coding subspaces as a sufficient framework for understanding neural computations, demonstrating that dimensions previously considered task-irrelevant and accounting for little variance can have a critical role in driving behavior.”
Neural dynamics outside task-coding dimensions drive decision trajectories through transient amplification
Most behaviors involve neural dynamics in high-dimensional activity spaces. A common approach is to extract dimensions that capture task-related variability, such as those separating stimuli or choice...
www.biorxiv.org
November 23, 2025 at 5:05 PM
Great work by Andrea & co, now out with more datasets, models, and analyses!
November 20, 2025 at 1:45 PM
I was honored to speak at Princeton’s symposium on The Physics of John Hopfield: Learning & Intelligence this week. I sketched out a perspective that ties together some of our recent work on ICL vs. parametric learning, and some possible links to hippocampal replay: 1/
November 15, 2025 at 8:56 PM
Reposted by Andrew Lampinen
Can't tell you how great it is to finally be able to release and talk about this work, SIMA 2, the next step toward embodied intelligence in rich, interactive 3D worlds!

deepmind.google/sima
SIMA 2: A Gemini-Powered AI Agent for 3D Virtual Worlds
Introducing SIMA 2, the next milestone in our research creating general and helpful AI agents. By integrating the advanced capabilities of our Gemini models, SIMA is evolving from an instruction-foll…
deepmind.google
November 13, 2025 at 3:20 PM
Reposted by Andrew Lampinen
Today in Nature Machine Intelligence, Kazuki Irie & I discuss 4 classic challenges for neural nets — systematic generalization, catastrophic forgetting, few-shot learning, & reasoning. We argue there is a unifying fix: the right incentives & practice. rdcu.be/eLRmg
October 20, 2025 at 1:18 PM
What aspects of human knowledge do vision models like CLIP fail to capture, and how can we improve them? We suggest models miss key global organization; aligning them makes them more robust. Check out Lukas Muttenthaler's work, finally out (in Nature!?) www.nature.com/articles/s41... + our blog! 1/3
Aligning machine and human visual representations across abstraction levels - Nature
Aligning foundation models with human judgments enables them to more accurately approximate human behaviour and uncertainty across various levels of visual abstraction, while additionally improving th...
www.nature.com
November 12, 2025 at 4:50 PM
Reposted by Andrew Lampinen
New work to appear @ TACL!

Language models (LMs) are remarkably good at generating novel well-formed sentences, leading to claims that they have mastered grammar.

Yet they often assign higher probability to ungrammatical strings than to grammatical strings.

How can both things be true? 🧵👇
November 10, 2025 at 10:11 PM