Dan Levenstein
@dlevenstein.bsky.social
Neuroscientist, in theory.
Studying sleep and navigation in 🧠s and 💻s.

Wu Tsai Investigator, Assistant Professor of Neuroscience at Yale.

An emergent property of a few billion neurons, their interactions with each other and the world over ~1 century.
Pinned
Thrilled to announce I'll be starting my own neuro-theory lab, as an Assistant Professor at @yaleneuro.bsky.social @wutsaiyale.bsky.social this Fall!

My group will study offline learning in the sleeping brain: how neural activity self-organizes during sleep and the computations it performs. 🧵
Reposted by Dan Levenstein
Can’t forget this excellent follow-up post too
January 2, 2026 at 10:43 PM
Reposted by Dan Levenstein
5️⃣ The thalamus is for __________ learning?

Just to keep this going 😁
4️⃣ The hippocampus is for _________ learning?
Way back in 1999, Kenji Doya sketched a big picture theory of the brain:

1️⃣ The cerebellum is specialized for supervised learning
2️⃣ The basal ganglia are for reinforcement learning
3️⃣ The cerebral cortex is for unsupervised learning

How does this hold up in 2026? www.sciencedirect.com/science/arti...
January 2, 2026 at 6:46 AM
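(Editor's aside: a minimal toy sketch of what Doya's division of labor means in update-rule terms — my own gloss, not from the paper. The same toy weight vector gets updated by a supervised delta rule, a TD-style reinforcement signal, and a Hebbian correlation rule; all numbers are made up.)

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)   # toy synaptic weights
x = rng.normal(size=3)   # presynaptic activity
eta = 0.1                # learning rate

# 1️⃣ Supervised (cerebellum): delta rule driven by an explicit error signal
target = 1.0
w_sup = w + eta * (target - w @ x) * x

# 2️⃣ Reinforcement (basal ganglia): a scalar TD error gates the update
reward, value, next_value, gamma = 1.0, 0.5, 0.8, 0.9
td_error = reward + gamma * next_value - value
w_rl = w + eta * td_error * x

# 3️⃣ Unsupervised (cortex): Hebbian rule driven only by input correlations
w_unsup = w + eta * (w @ x) * x
```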
Reposted by Dan Levenstein
Context: For You is a custom feed based on your likes.

It finds people who liked the same posts as you, and shows you what else they've liked recently.
December 21, 2025 at 2:53 AM
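(Editor's aside: that description amounts to simple like-based collaborative filtering. A toy sketch of the idea, with made-up users and post ids — not Bluesky's actual implementation.)

```python
from collections import Counter

# Hypothetical data: user -> set of liked post ids
likes = {
    "me":    {"p1", "p2", "p3"},
    "alice": {"p1", "p2", "p4", "p5"},
    "bob":   {"p3", "p6"},
}

def for_you(user, likes, k=2):
    """Score posts liked by users whose likes overlap with yours."""
    my_likes = likes[user]
    scores = Counter()
    for other, their_likes in likes.items():
        if other == user:
            continue
        overlap = len(my_likes & their_likes)  # shared taste
        for post in their_likes - my_likes:    # what else they liked
            scores[post] += overlap
    return [post for post, _ in scores.most_common(k)]

print(for_you("me", likes))  # ['p4', 'p5'] — alice's other likes rank highest
```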
Oh.
Saw Bluesky described as where elder millennials go to retire from the internet, and immediately felt the peace that passes all understanding wash over me. None of us have to struggle any longer. We completed our time.
December 29, 2025 at 2:41 AM
Reposted by Dan Levenstein
Where is the story in a book?
Where are thoughts in the brain? Are they in the brain?
December 21, 2025 at 10:32 AM
Reposted by Dan Levenstein
The idea of an inner and outer loop is fascinating, but this model might face competition (probably on parsimony grounds) from the "HRRL" described here:

tinyurl.com/msptrdtj

but your model seems to answer the question in the linked paper:

"we have to admit that we do not know how goals are computed"
Where Does Value Come From?
The computational framework of reinforcement learning (RL) has allowed us to both understand biological brains and build successful artificial agents. However, in this opinion, we highlight open chall...
tinyurl.com
December 19, 2025 at 6:55 AM
Reposted by Dan Levenstein
Goal selection through the lens of subjective functions:
arxiv.org/abs/2512.15948
I welcome any feedback on these preliminary ideas.
Subjective functions
Where do objective functions come from? How do we select what goals to pursue? Human intelligence is adept at synthesizing new objective functions on the fly. How does this work, and can we endow arti...
arxiv.org
December 19, 2025 at 3:15 AM
Reposted by Dan Levenstein
WTI is hiring a Research Software Engineer.

Join our interdisciplinary community at Yale to collaborate with researchers on brain + behavioral data. Learn more at the link below + share this opportunity with your network!

🔗 wti.yale.edu/opportunities

#KnowTogether #ScienceAtYale
December 15, 2025 at 4:04 PM
Reposted by Dan Levenstein
Is there any evidence for anti-Hebbian plasticity during sleep? This is an old hypothesis that continues to get cited (and to appear in some computational models), yet I haven't been able to find any study that shows this. There's ample evidence of synaptic downscaling, but that's different.
December 15, 2025 at 10:38 AM
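(Editor's aside: the distinction the post draws is mechanistic, and easy to state as toy update rules — illustrative only, with made-up numbers. Anti-Hebbian plasticity weakens synapses as a function of correlated pre/post activity, so which synapses change depends on which inputs were coactive; downscaling shrinks all weights uniformly, preserving their relative pattern.)

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.uniform(0.5, 1.5, size=5)  # toy synaptic weights
pre = rng.random(5)                # presynaptic rates
post = 1.0                         # postsynaptic rate
eta, alpha = 0.1, 0.8

# Anti-Hebbian: coactive pre/post *weakens* a synapse, so the update
# is activity-dependent and synapse-specific
w_anti = w - eta * pre * post

# Downscaling: uniform multiplicative shrink, independent of activity,
# which preserves relative weights (and hence the stored pattern)
w_down = alpha * w
```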
This seems to fly in the face of “thou shalt not assume causation from correlation” and “thou shalt not assume function from form”

From what I can tell, the argument is that an adaptive system (evo/bio/neuro) will learn to use any knob available, so if we see a knob we should assume it’s used?
I would add "where you see A, assume it is functional".
December 7, 2025 at 3:21 PM
Some really good tips here. Wish I had learned number 3 earlier, or ever for that matter 🥲

www.reddit.com/r/GradSchool...
From the GradSchool community on Reddit
www.reddit.com
December 7, 2025 at 2:06 AM
In which @kordinglab.bsky.social argues LLMs are more like an electric motor than a drill, and starts to build a drill for scientific research.

open.substack.com/pub/kording/...
The Electric Motor and the Drill - we use AI in the wrong way
Power tools are better than general purpose tools for most applications; my science planning app planyourscience.com is a result of this philosophy
open.substack.com
December 5, 2025 at 3:54 PM
Reposted by Dan Levenstein
0/10 Thanks for the interest in our preprint. Some takes say it negates or fully supports the “manifold hypothesis”; neither is quite right. Our results show that if you focus only on the manifold capturing most of the task-related variance, you could miss important dynamics that actually drive behavior.
“Our findings challenge the conventional focus on low-dimensional coding subspaces as a sufficient framework for understanding neural computations, demonstrating that dimensions previously considered task-irrelevant and accounting for little variance can have a critical role in driving behavior.”
Neural dynamics outside task-coding dimensions drive decision trajectories through transient amplification
Most behaviors involve neural dynamics in high-dimensional activity spaces. A common approach is to extract dimensions that capture task-related variability, such as those separating stimuli or choice...
www.biorxiv.org
December 2, 2025 at 7:48 AM
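(Editor's aside: the mechanism named in the preprint's title, transient amplification, is easy to reproduce in a generic non-normal linear system — this is an illustration of the phenomenon, not the paper's model. A tiny input along one "low-variance" direction transiently drives a much larger excursion along another, even though every eigenvalue is stable.)

```python
import numpy as np

# Non-normal dynamics x' = A x: dim 0 feeds strongly into dim 1,
# but both eigenvalues are -1 (lower-triangular), so everything decays.
A = np.array([[-1.0, 0.0],
              [ 8.0, -1.0]])

dt, T = 0.01, 300
x = np.array([0.1, 0.0])  # tiny input along the "low-variance" dimension
traj = np.empty((T, 2))
for t in range(T):
    x = x + dt * (A @ x)  # forward-Euler integration
    traj[t] = x

# Peak excursion in dim 1 is ~0.29, roughly 3x the 0.1 input,
# before everything relaxes back to zero.
print(f"input: 0.1, peak response: {traj[:, 1].max():.2f}")
```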
For a while it felt like bsky was recapturing the prof involvement of science twitter but not the students (the former PhDs/postdocs of science twitter are the new profs of bsky 😅).

Lately it feels like a lot more students are getting on the bus 🧪🦋🚌 🙌🙌🙌

#neuroskyence
November 30, 2025 at 12:32 AM
TFW there’s a whole new podcast about Fela Kuti for the ride home 🤩🙌

open.spotify.com/episode/203S...
Fela Kuti: Enter the Shrine
open.spotify.com
November 30, 2025 at 12:24 AM
Sometimes I think the best we can hope for is productively wrong 😉
Pre-19th-c. stellar parallax experiments are one of my fav examples of productive wrongness: very well-designed experiments that could not give the correct result because of ancillary hypotheses about how far away stars are.
November 29, 2025 at 11:43 PM
Where do you think this is *not* the case? (i.e. what parts of theoretical neuro today are truly new?)

Where do you think this will not be the case in 20 years? (i.e. if research progresses in a direction you think it should, what will be the new stuff we’re not talking about today?)
I think it's a problem for neuroscience, particularly theoretical neuroscience. I was watching a talk with someone the other day and said to them "I feel like I could have been listening to this same talk when I started in neuro almost 20 years ago". Turns out they were thinking the same thing.
November 29, 2025 at 11:41 PM
Reposted by Dan Levenstein
Around 12 years ago I had the good fortune to meet Massimo Scanziani and told him about the ongoing postdoc project I was preparing a manuscript on; his first question was "what is the central message of your paper?"

That one prompt changed the way I write papers forever
November 27, 2025 at 9:06 PM
Reposted by Dan Levenstein
Oh man. Science Neural Circuits would be my new favorite journal.
November 26, 2025 at 8:02 PM
There’s an interesting parallel here to neural network interpretability…

Understanding the recipe is not the same as knowing how the cake tastes at inference.
If you make a system that can self-replicate and is capable of mutation in a way that can respond to selective influences, that doesn't mean it's alive. Who agrees with that?
November 26, 2025 at 2:32 PM
Reposted by Dan Levenstein
I am really proud that eLife have published this paper. It is a very nice paper, but you need to also read the reviews to understand why! 1/n
"The inevitability and superfluousness of cell types in spatial cognition". Intuitive cell types are found in random artificial networks using the same selection criteria neuroscientists use with actual data. elifesciences.org/reviewed-pre... 1/2
elifesciences.org
November 25, 2025 at 8:34 PM
Reposted by Dan Levenstein
Y’all are reading this paper in the wrong way.

We love to trash dominant hypotheses, but we need to look for evidence against the manifold hypothesis elsewhere:

This elegant work doesn't show neural dynamics are high D, nor that we should stop using PCA

It’s quite the opposite!

(thread)
“Our findings challenge the conventional focus on low-dimensional coding subspaces as a sufficient framework for understanding neural computations, demonstrating that dimensions previously considered task-irrelevant and accounting for little variance can have a critical role in driving behavior.”
Neural dynamics outside task-coding dimensions drive decision trajectories through transient amplification
Most behaviors involve neural dynamics in high-dimensional activity spaces. A common approach is to extract dimensions that capture task-related variability, such as those separating stimuli or choice...
www.biorxiv.org
November 25, 2025 at 4:16 PM