Taylor Webb
@taylorwwebb.bsky.social
Studying cognition in humans and machines https://scholar.google.com/citations?user=WCmrJoQAAAAJ&hl=en
Pinned
LLMs have shown impressive performance in some reasoning tasks, but what internal mechanisms do they use to solve these tasks? In a new preprint, we find evidence that abstract reasoning in LLMs depends on an emergent form of symbol processing arxiv.org/abs/2502.20332 (1/N)
Emergent Symbolic Mechanisms Support Abstract Reasoning in Large Language Models
Many recent studies have found evidence for emergent reasoning capabilities in large language models, but debate persists concerning the robustness of these capabilities, and the extent to which they ...
arxiv.org
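The flavor of task at issue here can be made concrete with a toy example. Below is a minimal sketch (my own illustration, not code or stimuli from the paper) of an in-context identity-rule task: the model sees a few completed ABA or ABB triplets and must complete a new one, which requires binding tokens to abstract roles rather than memorizing surface forms.

```python
import random

# Toy generator for ABA / ABB identity-rule prompts, of the kind used to
# probe abstract reasoning in-context. Illustrative only; the paper's
# actual tasks, vocabulary, and formatting are not reproduced here.
VOCAB = ["cup", "tree", "stone", "bird", "lamp", "river", "cloud", "key"]

def make_prompt(rule="ABA", n_examples=3, seed=0):
    rng = random.Random(seed)
    lines = []
    for _ in range(n_examples + 1):  # the final triplet becomes the query
        a, b = rng.sample(VOCAB, 2)
        third = a if rule == "ABA" else b
        lines.append(f"{a} {b} {third}")
    *_, answer = lines[-1].split()
    prompt = "\n".join(lines[:-1] + [" ".join(lines[-1].split()[:2])])
    return prompt, answer

prompt, answer = make_prompt(rule="ABA")
print(prompt)  # three worked examples, then an incomplete triplet
print(answer)  # the completion a correct rule-follower should produce
```

A model that completes this correctly for novel tokens is, in effect, treating the first and second positions as variables; the paper's claim is that LLMs implement this kind of variable binding and retrieval internally.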
Reposted by Taylor Webb
Our paper “The cost of thinking is similar between large reasoning models and humans” is now out in PNAS! 🤖🧠
w/ @fepdelia.bsky.social, @hopekean.bsky.social, @lampinen.bsky.social, and @evfedorenko.bsky.social
Link: www.pnas.org/doi/10.1073/... (1/6)
PNAS
www.pnas.org
November 19, 2025 at 8:14 PM
Reposted by Taylor Webb
LLMs are trained to compress data by mapping sequences to high-dimensional representations!
How does the complexity of this mapping change across LLM training? How does it relate to the model’s capabilities? 🤔
Announcing our #NeurIPS2025 📄 that dives into this.

🧵below
#AIResearch #MachineLearning #LLM
October 31, 2025 at 4:19 PM
Reposted by Taylor Webb
this paper takes me by surprise a bit. Of course, we all know Ned's been thinking along these lines for decades: philpapers.org/rec/BLOBVC
but is he really going to seriously publish a new paper on this now, given all the AI hype & debates about how unscientific some popular views on consciousness are these days?

1/
October 9, 2025 at 2:40 AM
Very excited to share that our work (together with co-first author Shanka Subhra Mondal and @neuroai.bsky.social) on a brain-inspired architecture for planning with LLMs is now out in Nature Communications! www.nature.com/articles/s41... (thread below)
A brain-inspired agentic architecture to improve planning with LLMs - Nature Communications
Multi-step planning is a challenge for LLMs. Here, the authors introduce a brain-inspired Modular Agentic Planner that decomposes planning into specialized LLM modules, improving performance across tasks and highlighting the value of cognitive neuroscience for LLM design.
www.nature.com
October 6, 2025 at 9:51 PM
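As a rough illustration of the modular idea (module roles and prompt wording below are my own placeholders, not the paper's design), such a planner can be sketched as a propose-monitor-simulate-evaluate loop in which each role is a separately prompted LLM call:

```python
# Minimal sketch of a modular LLM planner. Each "module" is just a
# separately prompted call to the same text-in/text-out model; the
# module roles and prompts here are hypothetical placeholders.
from typing import Callable, List

LLM = Callable[[str], str]  # any text-in/text-out language model

def modular_plan(llm: LLM, task: str, max_steps: int = 10) -> List[str]:
    plan: List[str] = []
    state = f"Task: {task}\nNo steps taken yet."
    for _ in range(max_steps):
        # Actor: propose a single next step given the current state.
        step = llm(f"{state}\nPropose the single next step:")
        # Monitor: vet the proposal before committing to it.
        verdict = llm(f"{state}\nProposed step: {step}\nIs this step valid? Answer yes or no:")
        if verdict.strip().lower().startswith("no"):
            continue  # rejected; loop around and propose again
        plan.append(step)
        # Predictor: roll the state description forward under the step.
        state = llm(f"{state}\nAfter doing: {step}\nDescribe the resulting state:")
        # Evaluator: decide whether the goal has been reached.
        done = llm(f"{state}\nIs the task now complete? Answer yes or no:")
        if done.strip().lower().startswith("yes"):
            break
    return plan
```

The point of the decomposition is that each call does one narrow job (propose, check, simulate, evaluate), which tends to be more reliable than asking a single monolithic prompt to plan end to end.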
Very nice commentary arguing that binding is still a problem for both biological and artificial neural networks: www.sciencedirect.com/science/arti...
Feature binding in biological and artificial vision
www.sciencedirect.com
September 7, 2025 at 4:07 PM
Reposted by Taylor Webb
Very happy to announce that our paper “Sensory Horizons and the Functions of Conscious Vision” is now out as a target article in BBS!! @smfleming.bsky.social and I present a new theory of the evolution and functions of visual consciousness. Article here: doi.org/10.1017/S014.... A (long) thread 🧵
Sensory Horizons and the Functions of Conscious Vision | Behavioral and Brain Sciences | Cambridge Core
doi.org
April 21, 2025 at 3:27 PM
Reposted by Taylor Webb
🔍 Large language models, similar to those behind ChatGPT, can predict how the human brain responds to visual stimuli

New study by @adriendoerig.bsky.social @freieuniversitaet.bsky.social with colleagues from Osnabrück, Minnesota and @umontreal-en.bsky.social

Read the whole story 👉 bit.ly/3JXlYmO
September 2, 2025 at 7:01 AM
Reposted by Taylor Webb
Very excited to release a new blog post that formalizes what it means for data to be compositional, and shows how compositionality can exist at multiple scales. Early days, but I think there may be significant implications for AI. Check it out! ericelmoznino.github.io/blog/2025/08...
Defining and quantifying compositional structure
What is compositionality? For those of us working in AI or cognitive neuroscience this question can appear easy at first, but becomes increasingly perplexing the more we think about it. We aren’t shor...
ericelmoznino.github.io
August 18, 2025 at 8:46 PM
Reposted by Taylor Webb
Our new preprint explores how advances in AI change how we think about the role of symbols in human cognition. As neural networks show capabilities once used to argue for symbolic processes, we need to revisit how we can identify the level of analysis at which symbols are useful.
🤖 🧠 NEW PAPER ON COGSCI & AI 🧠 🤖

Recent neural networks capture properties long thought to require symbols: compositionality, productivity, rapid learning

So what role should symbols play in theories of the mind? For our answer...read on!

Paper: arxiv.org/abs/2508.05776

1/n
August 15, 2025 at 6:59 PM
New position paper! We argue that symbolic and neural network models are not in opposition to each other, but occupy different levels of analysis, and also outline a new research agenda for better understanding the relationship between them. Please check out the paper / thread for more details!
🤖 🧠 NEW PAPER ON COGSCI & AI 🧠 🤖

Recent neural networks capture properties long thought to require symbols: compositionality, productivity, rapid learning

So what role should symbols play in theories of the mind? For our answer...read on!

Paper: arxiv.org/abs/2508.05776

1/n
August 15, 2025 at 4:40 PM
Reposted by Taylor Webb
Can LLMs reason by analogy like humans? We investigate this question in a new paper published in the Journal of Memory and Language (link below). This was a long-running but very rewarding project. Here are a few thoughts on our methodology and main findings. 1/9
August 11, 2025 at 8:02 AM
Reposted by Taylor Webb
Is the Language of Thought == Language? A Thread 🧵
New Preprint (link: tinyurl.com/LangLOT) with @alexanderfung.bsky.social, Paris Jaggers, Jason Chen, Josh Rule, Yael Benn, @joshtenenbaum.bsky.social, @spiantado.bsky.social, Rosemary Varley, @evfedorenko.bsky.social
1/8
Evidence from Formal Logical Reasoning Reveals that the Language of Thought is not Natural Language
Humans are endowed with a powerful capacity for both inductive and deductive logical thought: we easily form generalizations based on a few examples and draw conclusions from known premises. Humans al...
tinyurl.com
August 3, 2025 at 8:18 PM
Reposted by Taylor Webb
My first first-author paper, comparing the properties of memory-augmented large language models and human episodic memory, is now out in @cp-trendscognsci.bsky.social!

authors.elsevier.com/a/1lV174sIRv...

Here’s a quick 🧵(1/n)
July 26, 2025 at 3:05 PM
Reposted by Taylor Webb
After five years of confused staring at Greek letters, it is my absolute pleasure to finally share our (with @smfleming.bsky.social) computational model of mental imagery and reality monitoring: Perceptual Reality Monitoring as Higher-Order Inference on Sensory Precision ✨
osf.io/preprints/ps...
July 23, 2025 at 2:18 PM
Very excited for this symposium! We have an amazing lineup of speakers exploring the intersection between cog sci and mechanistic interpretability. If you’re at CogSci and interested in the ways in which mechanisms in LLMs might inform cognitive theories, please check it out!
Thrilled to announce our symposium, Cognitively Inspired Interpretability in Large Neural Networks, at #CogSci2025 featuring @taylorwwebb.bsky.social, Ellie Pavlick, Jiahai Feng, Gustaw Opielka, @claires012345.bsky.social, and Idan Blank!
July 18, 2025 at 1:58 AM
Reposted by Taylor Webb
📯 Come visit our #ICML25 Spotlight Poster and meet @taylorwwebb.bsky.social to discuss our work: "Toward an Algorithmic Evaluation and Understanding of Generative AI."

Paper: openreview.net/forum?id=eax...
Poster: icml.cc/media/Poster...
Position: We Need An Algorithmic Understanding of Generative AI
What algorithms do LLMs actually learn and use to solve problems? Studies addressing this question are sparse, as research priorities are focused on improving performance through scale, leaving a...
openreview.net
July 16, 2025 at 7:07 AM
Reposted by Taylor Webb
If you're at ICML, check out our work on AlgEval, working toward an algorithmic understanding of generative AI. I couldn't make it in person, but I'm excited to say @taylorwwebb.bsky.social is there presenting our spotlight paper.
P.S. If you see Taylor, congratulate him on his professorship!
arxiv.org/abs/2507.07544
July 16, 2025 at 6:32 PM
Come check out our poster session at 11 tomorrow to find out how LLMs approximate symbol systems for abstract reasoning: icml.cc/virtual/2025...
July 17, 2025 at 3:04 AM
Reposted by Taylor Webb
Excited to announce the first workshop on CogInterp: Interpreting Cognition in Deep Learning Models @ NeurIPS 2025! 📣

How can we interpret the algorithms and representations underlying complex behavior in deep learning models?

🌐 coginterp.github.io/neurips2025/

1/4
First Workshop on Interpreting Cognition in Deep Learning Models (NeurIPS 2025)
coginterp.github.io
July 16, 2025 at 1:08 PM
Reposted by Taylor Webb
Excited to share a new preprint w/ @annaschapiro.bsky.social! Why are there gradients of plasticity and sparsity along the neocortex–hippocampus hierarchy? We show that brain-like organization of these properties emerges in ANNs that meta-learn layer-wise plasticity and sparsity. bit.ly/4kB1yg5
A gradient of complementary learning systems emerges through meta-learning
Long-term learning and memory in the primate brain rely on a series of hierarchically organized subsystems extending from early sensory neocortical areas to the hippocampus. The components differ in t...
bit.ly
July 16, 2025 at 4:15 PM
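A stripped-down version of the two-loop logic (my own toy, assuming a simple formulation; not the paper's architecture, tasks, or results): the inner loop adapts weights on a task using per-layer learning rates and sparsity masks, and the outer loop treats the per-layer learning rates as the thing being optimized.

```python
import numpy as np

# Toy two-loop setup. Inner loop: adapt weights on a task with per-layer
# learning rates ("plasticity") and per-layer sparsity masks. Outer loop:
# meta-learn the per-layer plasticity by finite differences. Purely
# illustrative; not the paper's model or findings.
n_layers, dim = 4, 8
plasticity = np.full(n_layers, 0.1)          # meta-learned, per layer
sparsity = np.linspace(0.9, 0.3, n_layers)   # fraction of units updated (held fixed here)

def adapt_and_eval(plast, seed=0):
    """Inner loop: a few masked gradient steps on a toy quadratic task;
    returns post-adaptation loss. Fixed seed keeps evaluations paired."""
    rng = np.random.default_rng(seed)
    w = [rng.standard_normal(dim) for _ in range(n_layers)]
    target = [rng.standard_normal(dim) for _ in range(n_layers)]
    for _ in range(5):
        for i in range(n_layers):
            grad = 2 * (w[i] - target[i])          # d/dw ||w - target||^2
            mask = rng.random(dim) < sparsity[i]   # layer-specific sparsity
            w[i] = w[i] - plast[i] * grad * mask
    return sum(np.sum((w[i] - target[i]) ** 2) for i in range(n_layers))

# Outer loop: central finite differences as a crude stand-in for
# meta-gradients through the inner loop.
meta_lr, eps = 0.01, 1e-2
for _ in range(50):
    for i in range(n_layers):
        hi, lo = plasticity.copy(), plasticity.copy()
        hi[i] += eps
        lo[i] -= eps
        g = (adapt_and_eval(hi) - adapt_and_eval(lo)) / (2 * eps)
        plasticity[i] = max(plasticity[i] - meta_lr * g, 1e-4)

print(plasticity)  # layers are free to settle at different plasticity levels
```

The paper's claim, in a far richer setting, is that when such layer-wise properties are meta-learned, they settle into a brain-like gradient along the neocortex-hippocampus hierarchy.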
Reposted by Taylor Webb
Super excited to share this one!! Meta-learning sparsity and learning rate gives rise to brain-like gradients of complementary learning systems. So complementary learning systems emerge organically through behavior optimization, and it's not just two of them!!
Excited to share a new preprint w/ @annaschapiro.bsky.social! Why are there gradients of plasticity and sparsity along the neocortex–hippocampus hierarchy? We show that brain-like organization of these properties emerges in ANNs that meta-learn layer-wise plasticity and sparsity. bit.ly/4kB1yg5
A gradient of complementary learning systems emerges through meta-learning
Long-term learning and memory in the primate brain rely on a series of hierarchically organized subsystems extending from early sensory neocortical areas to the hippocampus. The components differ in t...
bit.ly
July 16, 2025 at 4:18 PM
Reposted by Taylor Webb
Exciting new preprint from the lab: “Adopting a human developmental visual diet yields robust, shape-based AI vision”. A most wonderful case where brain inspiration massively improved AI solutions.

Work with @zejinlu.bsky.social @sushrutthorat.bsky.social and Radek Cichy

arxiv.org/abs/2507.03168
July 8, 2025 at 1:04 PM
Reposted by Taylor Webb
🧠 Can a neural network build a spatial map from scattered episodic experiences like humans do?

We introduce the Episodic Spatial World Model (ESWM)—a model that constructs flexible internal world models from sparse, disjoint memories.

🧵👇 [1/12]
June 28, 2025 at 1:09 AM
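One way to picture the idea (a hand-rolled toy, not the ESWM architecture itself): treat each episodic memory as an isolated one-step transition, and let the "world model" be whatever lets those fragments be stitched into routes never experienced end to end.

```python
from collections import defaultdict, deque

# Toy illustration: sparse, disjoint one-step memories (state, action,
# next_state) are stitched into a graph, which then supports planning
# routes that were never experienced as a whole. An explicit-graph
# stand-in for the learned model, not the ESWM architecture itself.
memories = [
    ("A", "east", "B"), ("B", "east", "C"),
    ("C", "north", "D"), ("A", "north", "E"),
]

graph = defaultdict(list)
for s, a, s2 in memories:
    graph[s].append((a, s2))

def plan(start, goal):
    """Breadth-first search over stitched memories: returns a sequence
    of actions linking states that never co-occurred in one episode."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, actions = queue.popleft()
        if state == goal:
            return actions
        for a, s2 in graph[state]:
            if s2 not in seen:
                seen.add(s2)
                queue.append((s2, actions + [a]))
    return None

print(plan("A", "D"))  # ['east', 'east', 'north'] — a route never seen whole
```

The deeper question in the paper is whether a neural network, rather than an explicit graph, can learn to do this stitching from sparse, disjoint experiences.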
Excited to share that our work on emergent symbolic mechanisms will be presented at ICML. Please check out the paper, where we have *significantly* expanded the results. In short, we find evidence for emergent symbol processing across several models and tasks (details below).
LLMs have shown impressive performance in some reasoning tasks, but what internal mechanisms do they use to solve these tasks? In a new preprint, we find evidence that abstract reasoning in LLMs depends on an emergent form of symbol processing arxiv.org/abs/2502.20332 (1/N)
Emergent Symbolic Mechanisms Support Abstract Reasoning in Large Language Models
Many recent studies have found evidence for emergent reasoning capabilities in large language models, but debate persists concerning the robustness of these capabilities, and the extent to which they ...
arxiv.org
June 23, 2025 at 3:46 PM
Reposted by Taylor Webb
Pleased to share our ICML Spotlight with @eberleoliver.bsky.social, Thomas McGee, Hamza Giaffar, @taylorwwebb.bsky.social.

Position: We Need An Algorithmic Understanding of Generative AI

What algorithms do LLMs actually learn and use to solve problems?🧵1/n
openreview.net/forum?id=eax...
June 20, 2025 at 3:48 PM