Jay Hennig
@jhennig.bsky.social
Computational neuroscientist interested in how we learn, and dad to twin boys
Asst prof at Baylor College of Medicine
https://www.henniglab.org/
Reposted by Jay Hennig
📍Excited to share that our paper was selected as a Spotlight at #NeurIPS2025!

arxiv.org/pdf/2410.03972

It started from a question I kept running into:

When do RNNs trained on the same task converge/diverge in their solutions?
🧵⬇️
November 24, 2025 at 4:43 PM
Putting the figures at the end of your preprint is one thing, but separating the CAPTIONS from the figures (with both at the end of the paper) is just plain cruel
November 18, 2025 at 3:07 PM
Reposted by Jay Hennig
My paper is out!
Computational modeling of error patterns during reward-based learning shows evidence that habit learning (value free!) supplements working memory in 7 human data sets.
rdcu.be/eQjLN
A habit and working memory model as an alternative account of human reward-based learning
Nature Human Behaviour - In this study, Collins proposes an alternative dual-process (working memory and habit) model of reinforcement learning in humans.
rdcu.be
November 17, 2025 at 5:18 PM
Reposted by Jay Hennig
paper🚨
When we learn a category, do we learn the structure of the world, or just where to draw the line? In a cross-species study, we show that humans, rats & mice adapt optimally to changing sensory statistics, yet rely on fundamentally different learning algorithms.
www.biorxiv.org/content/10.1...
Different learning algorithms achieve shared optimal outcomes in humans, rats, and mice
Animals must exploit environmental regularities to make adaptive decisions, yet the learning algorithms that enable this flexibility remain unclear. A central question across neuroscience, cognitive science, and machine learning is whether learning relies on generative or discriminative strategies. Generative learners build internal models of the sensory world itself, capturing its statistical structure; discriminative learners map stimuli directly onto choices, ignoring input statistics. These strategies rely on fundamentally different internal representations and entail distinct computational trade-offs: generative learning supports flexible generalisation and transfer, whereas discriminative learning is efficient but task-specific. We compared humans, rats, and mice performing the same auditory categorisation task, where category boundaries and rewards were fixed but sensory statistics varied. All species adapted their behaviour near-optimally, consistent with a normative observer constrained by sensory and decision noise. Yet their underlying algorithms diverged: humans predominantly relied on generative representations, mice on discriminative boundary-tracking, and rats spanned both regimes. Crucially, end-point performance concealed these differences; only learning trajectories and trial-to-trial updates revealed the divergence. These results show that similar near-optimal behaviour can mask fundamentally different internal representations, establishing a comparative framework for uncovering the hidden strategies that support statistical learning.
www.biorxiv.org
November 17, 2025 at 7:18 PM
Reposted by Jay Hennig
A remarkable journey of resilience and transformation, from the chaotic corridors of group homes to the halls of Columbia and Stanford, EMERGENCE is a coming-of-age tale where heartbreak and humor meet the scientific wonder of modern artificial intelligence.

🔗 Preorder: tinyurl.com/fzcxb5ea
November 17, 2025 at 6:08 PM
Reposted by Jay Hennig
Congrats to Ella for her new paper! She asked a really interesting question about how the brain represents uncertainty during hidden state inference, and in a lovely crossover with theoretical work, she shows that in mice, acetylcholine dynamics play a crucial role. www.biorxiv.org/content/10.1...
Acetylcholine reflects uncertainty during hidden state inference
To act adaptively, animals must infer features of the environment that cannot be observed directly, such as which option is currently rewarding, or which context they are in. These internal estimates,...
www.biorxiv.org
November 14, 2025 at 9:57 AM
Reposted by Jay Hennig
What I expected, of course, was that memory for these passages would improve systematically from 0th order to full text, but that's not quite what happened:
November 13, 2025 at 5:22 PM
Just realized that Colab Pro is free for students/teachers! Sharing this in case I wasn't the only one...

colab.research.google.com/signup
Colab Paid Services Pricing
colab.research.google.com
November 11, 2025 at 9:24 PM
Reposted by Jay Hennig
Connectome datasets alone are generally not sufficient to predict neural activity. However, pairing connectivity information with neural recordings can produce accurate predictions of activity in unrecorded neurons

www.nature.com/articles/s41...
Prediction of neural activity in connectome-constrained recurrent networks - Nature Neuroscience
The authors show that connectome datasets alone are generally not sufficient to predict neural activity. However, pairing connectivity information with neural recordings can produce accurate predictio...
www.nature.com
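To make the claim concrete, here's a toy sketch (my illustration, not the paper's method): in a small tanh RNN with fully known weights, a single step of noiselessly observed dynamics already pins down the activity of the unrecorded neurons, provided more neurons are recorded than hidden.

```python
# Toy sketch: known "connectome" + partial recordings constrain hidden units.
import numpy as np

rng = np.random.default_rng(0)
n, n_obs = 60, 40                              # total vs. recorded neurons
W = rng.normal(0, 1.2 / np.sqrt(n), (n, n))    # known connectivity
x = rng.normal(size=n)
X = [x]
for _ in range(200):                           # simulate ground-truth activity
    x = np.tanh(W @ x)
    X.append(x)
X = np.array(X)

obs, hid = slice(0, n_obs), slice(n_obs, n)
W_oo, W_oh = W[obs, obs], W[obs, hid]

# x_obs[t+1] = tanh(W_oo x_obs[t] + W_oh x_hid[t]); invert for x_hid[t]
# as an overdetermined linear system (n_obs equations, n - n_obs unknowns).
t = 100
rhs = np.arctanh(X[t + 1, obs]) - W_oo @ X[t, obs]
x_hid_hat, *_ = np.linalg.lstsq(W_oh, rhs, rcond=None)
print(np.allclose(x_hid_hat, X[t, hid]))       # True in this noiseless toy
```

With noise, unknown gains, and approximate weights the one-step inversion becomes a fitting problem, which is presumably why pairing connectivity with recordings, rather than either alone, is what works.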
November 10, 2025 at 10:12 PM
Reposted by Jay Hennig
This holiday season, give Jeff Bezos and Amazon the gift of zero dollars. 🥰
November 10, 2025 at 6:47 PM
Reposted by Jay Hennig
Our next paper on comparing dynamical systems (with particular relevance to artificial and biological neural networks) is out!! Joint work with @annhuang42.bsky.social, as well as @satpreetsingh.bsky.social, @leokoz8.bsky.social, Ila Fiete, and @kanakarajanphd.bsky.social: arxiv.org/pdf/2510.25943
November 10, 2025 at 4:16 PM
Reposted by Jay Hennig
Birds are both intelligent and incredibly agile, yet they are quite small. How do they achieve this with their little brains?
They have twice as many neurons per unit of brain mass as mammals, including primates.
www.pnas.org/doi/abs/10.1...
November 7, 2025 at 12:55 PM
Reposted by Jay Hennig
A tad late (announcements coming) but very happy to share the latest developments in my previous preprint!

Previously, we showed that neural representations for control of movement are largely distinct following supervised versus reinforcement learning. The latter most closely matches NHP recordings.
November 6, 2025 at 2:10 AM
Reposted by Jay Hennig
How does the brain find its way in realistic environments? 🧠 Using deep RL and neural data, we show that hippocampal-like networks support navigation, learning, and generalisation in partially observable environments—mirroring real animal behaviour. Now out:
www.nature.com/articles/s41...
#neuroAI
Hippocampus supports multi-task reinforcement learning under partial observability - Nature Communications
Neural mechanisms underlying reinforcement learning in naturalistic environments are not fully understood. Here authors show that reinforcement learning (RL) agents with hippocampal-like recurrence, u...
www.nature.com
November 3, 2025 at 10:20 AM
I'm partial to Siouxsie and the Banshees' "Halloween" 🎃
youtu.be/ksg2ESuEMhw?...
October 31, 2025 at 4:53 PM
Reposted by Jay Hennig
LLMs are trained to compress data by mapping sequences to high-dim representations!
How does the complexity of this mapping change across LLM training? How does it relate to the model’s capabilities? 🤔
Announcing our #NeurIPS2025 📄 that dives into this.

🧵below
#AIResearch #MachineLearning #LLM
October 31, 2025 at 4:19 PM
Reposted by Jay Hennig
To be clear, I have not been able to read the original study because the link does not work. Here is a fun paper on Fourier artifacts. www.pnas.org/doi/10.1073/...
Phantom oscillations in principal component analysis | PNAS
Principal component analysis (PCA) is a dimensionality reduction method that is known for being simple and easy to interpret. Principal components ...
www.pnas.org
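The paper's core point is easy to reproduce. A minimal sketch (my toy, not the paper's analysis): PCA applied to smooth random walks, which contain no oscillation at all, still returns sinusoid-looking principal components.

```python
# Phantom oscillations: PCA of random walks yields sinusoid-like PCs.
import numpy as np

rng = np.random.default_rng(0)
n_trials, T = 200, 300
walks = np.cumsum(rng.normal(size=(n_trials, T)), axis=1)  # aperiodic data
walks -= walks.mean(axis=0)                    # center each time point

# PCA via SVD: rows of Vt are the principal components across time.
_, _, Vt = np.linalg.svd(walks, full_matrices=False)

# For Brownian-like data the leading PCs approach quarter-, three-quarter-,
# etc. period sinusoids (the Karhunen-Loeve basis), despite aperiodic input.
t = np.linspace(0, 1, T)
print(abs(np.corrcoef(Vt[0], np.sin(np.pi * t / 2))[0, 1]))  # ~0.99
```

The "oscillations" come entirely from the autocorrelation structure of the data, not from any rhythm in the signal.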
October 31, 2025 at 1:14 PM
Reposted by Jay Hennig
uv makes installing and using Python *so* easy! It works on pretty much any computer and it's lightning fast. 🔭☄️ #astrocode

If you're still using conda, pyenv, or... basically any other tool, then I can *highly* recommend switching:
uv is the best thing to happen to the Python ecosystem in a decade - Blog - Dr. Emily L. Hunt
Released in 2024, uv is hands-down the best tool for managing Python installations and dependencies. Here's why.
emily.space
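For anyone curious what switching actually involves, a minimal workflow looks like this (standard uv commands; see the linked post and the uv docs for the full picture):

```sh
uv python install 3.12     # fetch an interpreter; no system Python needed
uv init myproject && cd myproject
uv add numpy               # adds the dependency to pyproject.toml
uv run python -c "import numpy; print(numpy.__version__)"
```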
October 24, 2025 at 1:03 PM
Reposted by Jay Hennig
French has it as masculine, but the more important thing is that it's pronounced like "chat, j'ai pété," which means "cat, I farted."
October 30, 2025 at 3:15 PM
Reposted by Jay Hennig
I wrote an op-ed on the world-class STEM research ecosystem in the United States, and how this ecosystem is now under attack on multiple fronts by the current administration: newsletter.ofthebrave.org/p/im-an-awar...
I’m an award-winning mathematician. Trump just cut my funding.
The “Mozart of Math” tried to stay out of politics. Then it came for his research.
newsletter.ofthebrave.org
August 18, 2025 at 3:45 PM
Just led a journal club discussion of Mante & Sussillo, and decided that to understand most papers in systems computational neuroscience, you need to be willing to assume that vectors of neural firing rates are literally all you need to understand what the brain is doing
October 29, 2025 at 6:29 PM
Reposted by Jay Hennig
Yes!! A POMDP world model benchmark with controlled test environments. So excited to play with this
October 29, 2025 at 12:47 PM
Reposted by Jay Hennig
does everybody know about my favorite website, the embroidery tips page that forgot to close its <h3> tags
Embroidery Trouble Shooting Page
Embroidery Trouble Shooting Answers to all your questions about Embroidery problems
web.archive.org
October 25, 2024 at 3:39 PM