Guillermo Puebla
guillermopuebla.bsky.social
Cognitive scientist studying visual reasoning in humans and DNNs.
https://guillermopuebla.com
Reposted by Guillermo Puebla
Interested in doing a PhD with me and lacns.github.io? Or with any of the incredible fellows in the IMPRS School of Cognition www.maxplanckschools.org/cognition-en - apply before Dec 1st at cognition.maxplanckschools.org/en/application
Language and Computation in Neural Systems
We are an international group of scientists consisting of linguists, cognitive scientists, cognitive neuroscientists, computational neuroscientists, computational modellers, computational scientists, ...
lacns.github.io
September 3, 2025 at 3:15 PM
Reposted by Guillermo Puebla
In our forthcoming paper, John Hummel and I ask what it would mean for a neural computing architecture such as the brain to implement a symbol system, and, relatedly, what makes it difficult to do so, with an eye toward the differences between humans, other animals, and ANNs.
From Basic Affordances to Symbolic Thought: A Computational Phylogenesis of Biological Intelligence
What is it about human brains that allows us to reason symbolically whereas most other animals cannot? There is evidence that dynamic binding, the ability to combine neurons into groups on the fly, is...
arxiv.org
August 22, 2025 at 6:25 PM
Reposted by Guillermo Puebla
Large language models (LLMs) do not simulate human psychology. That's the title of our new paper, available as a preprint today (1/12):

arxiv.org/abs/2508.06950
Large Language Models Do Not Simulate Human Psychology
Large Language Models (LLMs), such as ChatGPT, are increasingly used in research, ranging from simple writing assistance to complex data annotation tasks. Recently, some research has suggested that LLM...
arxiv.org
August 12, 2025 at 3:05 PM
Reposted by Guillermo Puebla
What does it mean if pure prediction fails? Even with 100% predictive accuracy, you still don't know how the model predicted unless you run an experiment that manipulates independent variables. A digital clock predicts the time on a cuckoo clock with 100% accuracy, but the two work totally differently.
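The clock analogy can be made concrete with a toy sketch: two models that agree on every output yet differ entirely in mechanism (all class and variable names here are invented for illustration).

```python
# Two "clocks" that agree perfectly on every reading, yet keep time by
# different internal mechanisms: prediction alone cannot tell them apart.

class DigitalClock:
    """Keeps time as a simple counter of elapsed seconds."""
    def __init__(self):
        self.seconds = 0

    def tick(self):
        self.seconds += 1

    def read(self):
        return self.seconds


class CuckooClock:
    """Keeps time via pendulum swings (two swings per second)."""
    def __init__(self):
        self.pendulum_swings = 0

    def tick(self):
        self.pendulum_swings += 2

    def read(self):
        return self.pendulum_swings // 2


digital, cuckoo = DigitalClock(), CuckooClock()
for _ in range(100):
    digital.tick()
    cuckoo.tick()

# 100% predictive agreement across the whole run...
assert digital.read() == cuckoo.read() == 100
# ...but only an intervention on the internals (e.g., perturbing
# pendulum_swings directly) would reveal that the mechanisms differ.
```

Distinguishing the two requires manipulating an internal variable, not collecting more predictions.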
December 12, 2024 at 11:05 PM
Reposted by Guillermo Puebla
What's the deal with negatively accelerating search functions in relational searches? The CASPER model of visual search shows how emergent features may allow for parallel processing in searches we'd expect to be steep and linear. Visit me Tuesday afternoon in the Banyan Breezeway! #VSS2024
May 21, 2024 at 5:11 PM
Reposted by Guillermo Puebla
I've got an exciting #Visionscience and #STEMed announcement: My textbook "Practical Vision: Learning through Experimentation" is now in production at Routledge Books! The book focuses on hands-on, analog exercises to identify and discuss key mechanisms of human vision. Coming October 2024!
April 22, 2024 at 4:36 PM
Reposted by Guillermo Puebla
Clear writing is (imperfect) evidence of clear thinking. The use of LLMs for writing is IMO often inexcusable, substituting for one’s own voice the median voices of the past and deceiving one’s audience with an incorrect picture of one’s understanding (corrupting their training data, so to speak).
March 15, 2024 at 4:16 PM
Reposted by Guillermo Puebla
Building larger LLMs to get AGI is like linearly accelerating towards light speed
January 14, 2024 at 11:39 PM
Reposted by Guillermo Puebla
Excited to share @rgast.bsky.social's new PNAS paper from the lab, with Sara Solla.

We ask: does cell type heterogeneity affect what neural networks can compute? How might different brain regions leverage heterogeneity to achieve different things?

www.pnas.org/doi/10.1073/...
January 11, 2024 at 2:26 PM
Reposted by Guillermo Puebla
We all agree in one respect, then: there is a lot of hype. But is the problem with researchers who take the hype seriously? That we should focus on the strengths rather than the weaknesses of DNNs? That there is little "confusion that deep neural networks (DNNs) are 'models of the human visual system'"? 🤷‍♂️
January 8, 2024 at 2:15 PM
Reposted by Guillermo Puebla
This is so cool:
www.pnas.org/doi/10.1073/...
Bacteria store a memory of swarming proficiency (measured as the time lag to start swarming on suitable media) in the form of intracellular iron levels. This memory can be passed down for 4 generations!
November 26, 2023 at 4:54 PM
Reposted by Guillermo Puebla
A lot of the neuroscience work, particularly in hippocampus, has been focused on content-addressable memory, where data and address are the same. But this might not be the right way to think about memory in the brain. Maybe we have an addressing system that is separate from stored content.
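The distinction can be sketched in a few lines (a hypothetical illustration; the function and the feature patterns are invented): in content-addressable memory a partial pattern retrieves the stored item that best matches it, whereas in addressed storage an arbitrary key carries no information about the content it points to.

```python
# Content-addressable recall: the cue IS (part of) the content.
def content_addressable_recall(memory, cue):
    """Return the stored pattern sharing the most features with the cue."""
    return max(memory, key=lambda item: len(item & cue))


memory = [
    {"furry", "barks", "four_legs"},    # a "dog" pattern
    {"feathers", "sings", "two_legs"},  # a "bird" pattern
]
dog = content_addressable_recall(memory, cue={"barks"})
# The partial cue {"barks"} retrieves the full "dog" pattern.

# Addressed storage: the key is separate from, and uninformative about,
# the stored content.
addressed = {
    0x1A: {"furry", "barks", "four_legs"},
    0x2B: {"feathers", "sings", "two_legs"},
}
also_dog = addressed[0x1A]  # retrieval requires knowing the address
```

On the separate-addressing view, the brain would maintain something like the second scheme: pointers to stored content rather than recall by content overlap alone.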
November 5, 2023 at 11:03 AM