Dan Levenstein
@dlevenstein.bsky.social
Neuroscientist, in theory. Studying sleep and navigation in 🧠s and 💻s.

Assistant Professor at Yale Neuroscience, Wu Tsai Institute.

An emergent property of a few billion neurons, their interactions with each other and the world over ~1 century.
Pinned
Thrilled to announce I'll be starting my own neuro-theory lab, as an Assistant Professor at @yaleneuro.bsky.social @wutsaiyale.bsky.social this Fall!

My group will study offline learning in the sleeping brain: how neural activity self-organizes during sleep and the computations it performs. 🧵
Reposted by Dan Levenstein
It’s possible to see social media, and now AI, as the new radio — the new information technology which will ruin democracies around the globe if we don’t find a way to prise off billionaires’ control of those new info channels.
November 9, 2025 at 2:56 PM
A loss of trust in information is just as bad as a loss of trustworthy information.
I spotted this on Mastodon and I find it horrible, not least for the speed with which this has happened.
November 9, 2025 at 2:40 PM
Starting to wonder if any of us will be at sfn or if we’re all going to be hanging out in airports for the week…
Really disappointing that our #NIH colleagues will NOT be at @sfn.org this year, will NOT be discussing science, will NOT be advising us on grants, will NOT be sharing results or advancing research. 🤐
Sigh. NIH normally sends several hundred scientists to the SFN annual meeting to learn, exchange info, come up with new ideas, and advance science. (The exchange of ideas is the very core of the scientific enterprise.)

This year, no one from NIH will attend due to the gov't implosion.
November 8, 2025 at 12:10 AM
Reposted by Dan Levenstein
I'm not saying we don't have "systems that rival human intelligence in key tasks" (though "key" is doing some heavy lifting). I'm saying that if you're going to make this your definition of AGI, you've been taking the piss all along.
November 6, 2025 at 9:16 PM
Not necessarily neuroscience, but I once heard introducing+defining the MDP formalism is the “RL handshake”
What other seemingly obligatory phrases do you notice in neuro papers?
November 4, 2025 at 9:17 PM
My favorite part of pragmatism is when it’s like “maybe instead of worrying about shit that doesn’t matter we should worry about shit that does.”

“Oh and btw we’ll learn a lot more about the shit that doesn’t in the process anyway.” 🫣
[2/9] We argue that instead of getting stuck on metaphysical debates (is AI conscious?), we should treat personhood as a flexible bundle of obligations (rights & responsibilities) that societies confer.
November 3, 2025 at 12:54 PM
Lyrics are just a vehicle for syllables.
November 1, 2025 at 9:06 PM
Reposted by Dan Levenstein
[2/9] We argue that instead of getting stuck on metaphysical debates (is AI conscious?), we should treat personhood as a flexible bundle of obligations (rights & responsibilities) that societies confer.
October 31, 2025 at 12:33 PM
Reposted by Dan Levenstein
New on the Archive:

Stump, David J. (2025) Lessons from Pragmatism for Philosophers of Science: Nine Teachings and a Cautionary Tale. [Preprint]

https://philsci-archive.pitt.edu/27072/
November 1, 2025 at 2:11 PM
Really hoping bifurcations are the new manifolds. What a time to be alive 🥲
October 29, 2025 at 1:11 AM
Reposted by Dan Levenstein
"Because science rejects claims to truth based on authority and depends on the criticism of established ideas, it is the enemy of autocracy. Because scientific knowledge is tentative and provisional, it is the enemy of dogma."
October 25, 2025 at 9:41 PM
Reposted by Dan Levenstein
George Box famously said "all models are wrong, some are useful", but what he forgot to add was that usefulness doesn't just depend on the model.

A model is useful *only with respect to a given target problem*
October 24, 2025 at 8:13 AM
TFW you name your company "Palantir"
Construction of the new ballroom is coming along nicely at the White House.
October 24, 2025 at 7:27 PM
Reposted by Dan Levenstein
Come do a postdoc at the Wu Tsai Institute!

WTI fellows have freedom to work with anyone at the institute, and preference is given to applicants who want to work on interdisciplinary projects with multiple faculty mentors.

If you’re interested in working with me, please reach out!
📣 Calling experimental, computational, or theoretical researchers!

WTI's Postdoc Fellowships application is now open, offering a competitive salary, structured mentorship, world-class facilities + more: wti.yale.edu/initiatives/...

Apply by November 10: apply.interfolio.com/174525

#KnowTogether
October 8, 2025 at 12:50 PM
Great interview touching on affordances and the utility of the weasel words we all use but no one agrees on what they mean (affordances, representation, computation, etc.).
What is plant nutation?
What is a motif in science?
What lessons does ecological psychology have for neuroscience?
How does Vicente @diovicen.bsky.social enjoy the band Judas Priest yet still do good philosophy and science?

Here are the answers:
braininspired.co/podcast/223/
October 23, 2025 at 1:00 PM
The best tweets are always buried in the replies.
Can you elaborate on how in-context learning can be a latent structural quality of language? I’m curious.
October 20, 2025 at 12:27 PM
Is this an allegory for climate change rn or what?
October 16, 2025 at 6:35 PM
Bees aside, there are a bunch of gems in here about modeling…

1) “The model is a useful and indeed unescapable tool of thought - it enables us to think about the unfamiliar in terms of the familiar. There are, however, dangers in its use: it is the function of criticism to disclose these dangers.”
🐝🐝🐝🐝🐝🐝🐝🐝🐝🐝🐝
🐝Your Brain Is Like A Computer🐝
🐝🐝🐝🐝🐝🐝🐝🐝🐝🐝🐝
October 16, 2025 at 12:25 PM
Reposted by Dan Levenstein
Science is a question-answering activity. Whenever someone tries to pose a scientific question about art, it always turns out to really be a question about something else (perception, emotion, etc). This is why many attempts to bring art and science together fall flat.
October 15, 2025 at 9:27 AM
Reposted by Dan Levenstein
It's possible to get first order approximate understanding of RNNs performing relatively complex tasks.

[1906.10720] Reverse engineering recurrent networks for sentiment classification reveals line attractor dynamics share.google/ElT686dgAUIk...

But some tasks are harder than others 🤷‍♂️
October 14, 2025 at 3:35 PM
Reposted by Dan Levenstein
Yeah, I agree. There seems to be a pattern across various "adversarial" papers where they feel the need to take down the orthodoxy. I think it's more often "yes and" rather than "no but".
October 14, 2025 at 1:08 PM
Reposted by Dan Levenstein
My cartoon for this week’s @newscientist.com
October 12, 2025 at 9:25 AM
Reposted by Dan Levenstein
We apply our model to survey spiking irregularity across cortical areas and find that Poisson irregularity is a rare exception, not the rule. Our results show the need to include non-Poisson spiking when inferring neural dynamics from single trials.
October 12, 2025 at 12:42 AM
Always find it an interesting question which ingredients are necessary for intelligence to emerge from scratch vs which are necessary if you only want to bootstrap from an existing intelligent system…
Moreover, AGI might also require social interactions to become a reality, where cultural evolution (and extended mind) play a major role. It is not by chance that brain and culture evolved together, allowing complex minds to emerge @anilseth.bsky.social @mitibennett.bsky.social
October 11, 2025 at 2:54 PM
Reposted by Dan Levenstein
here the trick is to unpack the word *understanding*

it’s not about what each neuron does, but the rules shaping the behaviour of biological systems!

"rules for development and learning in brains may be far easier to understand than their resulting properties"

👇👌
arxiv.org/abs/1907.06374
October 11, 2025 at 9:01 AM