Hsuan-Pei Huang
@pikapei.bsky.social
PhD student in Taiwan. He/him.
Reposted by Hsuan-Pei Huang
Konrad is doing an amazing take-down here, and at least on this point I fully agree. The "brain is analog" argument is a terrible strawman that holds back neuro.

Analog dynamics may help, but the theoretical scaling advantages of neurons mostly come from event-driven parallelism.

arxiv.org/abs/2507.17886
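A toy cost comparison of what "event-driven parallelism" buys (my own sketch, not from the linked paper): a dense update touches every synapse each timestep, while an event-driven update only propagates the spikes that actually occurred.

```python
# Toy cost comparison (illustrative only, not from the linked paper).
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                   # neurons
p_fire = 0.02                              # ~2% spike per timestep (sparse activity)
W = rng.standard_normal((n, n)) * 0.01     # synaptic weights
spikes = (rng.random(n) < p_fire).astype(float)

# Dense "analog-style" update: multiply the full weight matrix every step,
# O(n^2) ops regardless of how few neurons actually spiked.
dense_input = W @ spikes

# Event-driven update: only the columns of neurons that spiked contribute,
# so the cost scales with the spike count, O(k * n).
active = np.flatnonzero(spikes)
event_input = W[:, active].sum(axis=1)

assert np.allclose(dense_input, event_input)
print(f"ops per step: dense ~ {n * n:,} vs event-driven ~ {active.size * n:,}")
```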
November 19, 2025 at 4:27 PM
Reposted by Hsuan-Pei Huang
Catch me at the DANDI/NWB booth (3831) today at 1 pm for a live demo of this online textbook I developed with a former student of mine!

#SfN25 #SfN2025 @sfn.org

(Or check it out for yourself 👉 nwb4edu.github.io )
November 16, 2025 at 4:57 PM
Reposted by Hsuan-Pei Huang
I was honored to speak at Princeton’s symposium on The Physics of John Hopfield: Learning & Intelligence this week. I sketched out a perspective that ties together some of our recent work on ICL vs. parametric learning, and some possible links to hippocampal replay: 1/
November 15, 2025 at 8:56 PM
Reposted by Hsuan-Pei Huang
𝗜𝘀 𝘁𝗵𝗲 𝗯𝗿𝗮𝗶𝗻 𝗮 𝗰𝗼𝗺𝗽𝗹𝗲𝘅 𝗻𝗲𝘁𝘄𝗼𝗿𝗸?
In a sense yes, but does network science help us understand the brain as a complex system? Intriguing paper.
If nothing else, the paper has 800+ refs!
#neuroskyence #complexsystems
doi.org/10.1016/j.pl...
November 10, 2025 at 6:14 PM
Reposted by Hsuan-Pei Huang
Very cool new research from the Mellor lab continuing the BTSP story: CA1 OLM neurons decrease their activity in novel environments, ungating EC inputs and enabling formation of new place fields.

www.nature.com/articles/s41...
Hippocampal OLM interneurons regulate CA1 place cell plasticity and remapping - Nature Communications
Stability and flexibility are important, if antagonistic, features of memory. Here the authors show that a class of inhibitory neurons regulate plasticity and therefore the stability of memory represe...
www.nature.com
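A toy illustration of the gating logic (my own sketch, not the paper's model): EC input only drives new place-field formation once OLM inhibition drops, as in a novel environment.

```python
# Toy gating illustration (not the paper's model): divisive OLM inhibition
# gates how strongly EC input can drive new place-field formation.
def place_field_drive(ec_input: float, olm_rate: float) -> float:
    """Effective plasticity-inducing drive onto a CA1 dendrite."""
    return ec_input / (1.0 + max(olm_rate, 0.0))

familiar = place_field_drive(ec_input=1.0, olm_rate=5.0)   # OLM active: EC gated
novel = place_field_drive(ec_input=1.0, olm_rate=0.5)      # OLM quiet: EC ungated
print(f"drive in familiar env {familiar:.2f} vs novel env {novel:.2f}")
```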
November 11, 2025 at 9:00 PM
Reposted by Hsuan-Pei Huang
Psst - neuromorphic folks. Did you know that you can solve the SHD dataset with 90% accuracy using only 22 kb of parameter memory by quantising weights and delays? Check out our preprint with @pengfei-sun.bsky.social and @danakarca.bsky.social, or read the TLDR below. 👇🤖🧠🧪 arxiv.org/abs/2510.27434
Exploiting heterogeneous delays for efficient computation in low-bit neural networks
Neural networks rely on learning synaptic weights. However, this overlooks other neural parameters that can also be learned and may be utilized by the brain. One such parameter is the delay: the brain...
arxiv.org
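Back-of-envelope arithmetic for why low-bit quantisation of weights and delays shrinks parameter memory (the layer sizes below are hypothetical, not the counts from the preprint):

```python
# Back-of-envelope parameter-memory arithmetic with made-up layer sizes.
def memory_kib(n_params: int, bits_per_param: int) -> float:
    """Parameter memory in KiB for a given precision."""
    return n_params * bits_per_param / 8 / 1024

n_weights = 20_000    # hypothetical synapse count
n_delays = 20_000     # one learnable delay per synapse
n_params = n_weights + n_delays

print(f"32-bit: {memory_kib(n_params, 32):.1f} KiB")
print(f" 4-bit: {memory_kib(n_params, 4):.1f} KiB")
```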
November 13, 2025 at 5:40 PM
Reposted by Hsuan-Pei Huang
Example: the Faisal et al. paper on spike jitter due to ion channel noise (although with little explicit energy budgeting): journals.plos.org/ploscompbiol...

Our paper led by James Malkin on the energetics of synaptic precision: elifesciences.org/articles/92595 (contains some good refs to other papers too)
Stochastic Simulations on the Reliability of Action Potential Propagation in Thin Axons
Author SummaryNeurons in cerebral cortex achieve wiring densities of 4 km per mm3 by using unmyelinated axons of 0.3 μm average diameter as wires. Many axons (e.g., pain fibers) are thinner. Although,...
journals.plos.org
November 14, 2025 at 2:38 PM
Reposted by Hsuan-Pei Huang
Relative phase of membrane potential theta oscillations between individual hippocampal neurons code space https://www.biorxiv.org/content/10.1101/2025.11.14.688496v1
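For reference, a generic way to compute relative theta phase between two membrane-potential traces (illustrative only; not necessarily the paper's pipeline):

```python
# Generic relative-phase computation (illustrative; not the paper's pipeline):
# band-pass each membrane potential in the theta band, take the analytic
# signal, and compare instantaneous phases between the two neurons.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                  # sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(0)
v1 = np.sin(2 * np.pi * 8 * t) + 0.1 * rng.standard_normal(t.size)
v2 = np.sin(2 * np.pi * 8 * t + 0.6) + 0.1 * rng.standard_normal(t.size)

b, a = butter(2, [4 / (fs / 2), 12 / (fs / 2)], btype="band")   # 4-12 Hz theta
phase1 = np.angle(hilbert(filtfilt(b, a, v1)))
phase2 = np.angle(hilbert(filtfilt(b, a, v2)))

# Circular mean of the phase difference, wrapped to [-pi, pi].
rel_phase = np.angle(np.mean(np.exp(1j * (phase2 - phase1))))
print(f"mean relative phase ~ {rel_phase:.2f} rad (ground truth 0.60)")
```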
November 15, 2025 at 8:15 AM
Reposted by Hsuan-Pei Huang
new paper with @robertchisciure.bsky.social

link.springer.com/article/10.1...

"Cognition all the way down 2.0: neuroscience beyond neurons in the diverse intelligence era"

🧪
Cognition all the way down 2.0: neuroscience beyond neurons in the diverse intelligence era - Synthese
This paper formalizes biological intelligence as search efficiency in multi-scale problem spaces, aiming to resolve epistemic deadlocks in the basal “cognition wars” unfolding in the Diverse Intellige...
link.springer.com
November 7, 2025 at 12:31 AM
Reposted by Hsuan-Pei Huang
Interesting emerging trend showing how training with RL leads to different kinds of representations, frequently ones that better match biology. I've been trying to keep up with this lit after finding something similar in my postdoc (arxiv.org/abs/2112.02027). Let me know if you know of more examples!
A tad late (announcements coming) but very happy to share the latest developments in my previous preprint!

Previously, we showed that neural representations for control of movement are largely distinct following supervised or reinforcement learning. The latter most closely matches NHP recordings.
Here’s our latest work at @glajoie.bsky.social and @mattperich.bsky.social ‘s labs! Excited to see this out.

We used a combination of neural recordings & modelling to show that RL yields neural dynamics closer to biology, with useful continual learning properties.

www.biorxiv.org/content/10.1...
November 7, 2025 at 3:25 AM
Reposted by Hsuan-Pei Huang
Happy 158th Birthday, Marie Skłodowska Curie! She was the first woman to receive a Nobel Prize and the first person to receive the honor twice.

In 2017, #ScienceBooks toured the dynamics that established her as "the most iconic of all female scientists." https://scim.ag/47tkKYA
The making and remaking of Marie Curie
The famous physicist's legacy looms large 150 years after her birth
www.science.org
November 7, 2025 at 2:47 PM
Reposted by Hsuan-Pei Huang
More oriented towards postdocs & PIs, but this article has some suggestions for starting an academic website: www.nature.com/articles/s41...
How to design your academic website - Nature Human Behaviour
An academic website serves as both a public-facing window on the world wide web and an important internal laboratory resource. In this ‘How to’ piece, I outline how to build your academic website, inc...
www.nature.com
November 7, 2025 at 3:39 PM
Reposted by Hsuan-Pei Huang
Kazuki Irie has a forthcoming paper at NeurIPS that studies the following idea:
Linear attention has cheap, unbounded memory but low precision, whereas softmax attention has expensive, bounded memory but high precision. These can be combined to build better transformers.
arxiv.org/abs/2506.00744
Blending Complementary Memory Systems in Hybrid Quadratic-Linear Transformers
We develop hybrid memory architectures for general-purpose sequence processing neural networks, that combine key-value memory using softmax attention (KV-memory) with fast weight memory through dynami...
arxiv.org
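A minimal single-head sketch of the idea (my own simplification; the windowing scheme and feature map are assumptions, not the architecture from the paper): softmax attention handles a bounded recent window exactly, a linear-attention fast-weight matrix compresses the older prefix, and the two context vectors are summed.

```python
# Single-head sketch: exact softmax attention over a recent window plus a
# linear-attention fast-weight summary of the older prefix.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def phi(x):
    """Simple positive feature map for the linear-attention branch."""
    return np.maximum(x, 0.0) + 1e-6

def hybrid_attention(q, K, V, window=64):
    """q: (d,); K, V: (T, d). Returns a (d,) context vector."""
    # Softmax branch: precise, but its cost/memory grow with the window it keeps.
    K_recent, V_recent = K[-window:], V[-window:]
    scores = softmax(K_recent @ q / np.sqrt(q.size))
    precise = scores @ V_recent

    # Linear branch: compress everything older into one d x d fast-weight
    # matrix -- constant memory, but lossy (low "precision").
    K_old, V_old = K[:-window], V[:-window]
    if len(K_old) == 0:
        return precise
    S = phi(K_old).T @ V_old          # (d, d) associative memory
    z = phi(K_old).sum(axis=0)        # normaliser
    fuzzy = (phi(q) @ S) / (phi(q) @ z)
    return precise + fuzzy

d, T = 32, 256
rng = np.random.default_rng(1)
q = rng.standard_normal(d)
K, V = rng.standard_normal((T, d)), rng.standard_normal((T, d))
print(hybrid_attention(q, K, V).shape)   # (32,)
```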
November 4, 2025 at 10:33 AM
Reposted by Hsuan-Pei Huang
A semantotopic map in human hippocampus https://www.biorxiv.org/content/10.1101/2025.10.31.685959v1
November 2, 2025 at 8:15 AM
Reposted by Hsuan-Pei Huang
A short talk on the main architecture components of LLMs this year + a look beyond the transformer architecture: www.youtube.com/watch?v=lONy...
October 27, 2025 at 3:45 PM
Reposted by Hsuan-Pei Huang
Still beta-testing the app that helps people plan science, make it super clear, design their todos, and write a first version of the paper. Feeling overwhelmed by the countless aspects of doing science? Try this (for free): planyourscience.com
Scientific Paper Planner - AI-Powered Research Planning
Structure your scientific research with AI-powered guidance. From hypothesis to methodology, plan your research paper with intelligent mentoring.
planyourscience.com
October 29, 2025 at 10:06 PM
Reposted by Hsuan-Pei Huang
People emphasizing circles in dimensionality-reduced trajectories are emphasizing the wrong thing imho. In a spatiotemporally low-pass world, dimensionality reduction literally reveals the Fourier bases.
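A quick illustrative check of the claim: PCA on smooth, low-pass random signals returns temporal components concentrated on single frequencies, because the covariance of a roughly stationary low-pass process is (approximately) diagonalised by Fourier modes.

```python
# Illustrative check: PCA on low-pass random signals yields sinusoid-like PCs.
import numpy as np

rng = np.random.default_rng(0)
n_units, T = 200, 500
white = rng.standard_normal((n_units, T))

# Smooth ("low-pass") world: moving-average filter each unit's time series.
kernel = np.ones(25) / 25
smooth = np.array([np.convolve(x, kernel, mode="same") for x in white])
smooth -= smooth.mean(axis=1, keepdims=True)

# PCA via SVD; rows of Vt are the temporal principal components.
_, _, Vt = np.linalg.svd(smooth, full_matrices=False)

# A Fourier-mode-like PC has its power concentrated in one frequency bin.
power = np.abs(np.fft.rfft(Vt[0])) ** 2
print(f"fraction of PC1 power in its peak frequency bin: {power.max() / power.sum():.2f}")
```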
October 31, 2025 at 1:13 PM
Reposted by Hsuan-Pei Huang
When does new learning interfere with existing knowledge in people and ANNs? Great to have this out today in @nathumbehav.nature.com

Work with @summerfieldlab.bsky.social, @tsonj.bsky.social, Lukas Braun and Jan Grohn
www.nature.com/articles/s41...
October 31, 2025 at 2:47 PM
Reposted by Hsuan-Pei Huang
Revised version of our #NeurIPS2025 paper with full code base in Julia & Python now online, see arxiv.org/abs/2505.13192
October 28, 2025 at 6:27 PM
Reposted by Hsuan-Pei Huang
Really hoping bifurcations are the new manifolds. What a time to be alive 🥲
October 29, 2025 at 1:11 AM
Reposted by Hsuan-Pei Huang
Bifurcations—an underexplored concept in neuroscience—can help explain how small differences in neural circuits give rise to entirely novel functions, writes Xiao-Jing Wang.

#neuroskyence

www.thetransmitter.org/neural-dynam...
The missing half of the neurodynamical systems theory
Bifurcations—an underexplored concept in neuroscience—can help explain how small differences in neural circuits give rise to entirely novel functions.
www.thetransmitter.org
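A standard textbook toy (not from the essay) showing the point: a single self-exciting rate unit dx/dt = -x + w·tanh(x) switches from a single resting state to bistable persistent activity as w crosses 1, so a small parameter change yields a qualitatively new function (a memory).

```python
# Textbook pitchfork bifurcation in a single self-exciting rate unit.
import numpy as np

def settle(w: float, x0: float, dt: float = 0.01, steps: int = 5000) -> float:
    """Integrate dx/dt = -x + w*tanh(x) and return the final state."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + w * np.tanh(x))
    return x

for w in (0.8, 1.2):
    up, down = settle(w, x0=+0.5), settle(w, x0=-0.5)
    print(f"w={w}: settles to {up:+.2f} / {down:+.2f}")
# w=0.8 -> both initial conditions decay to ~0 (a single attractor);
# w=1.2 -> they settle at +/-x* (bistability, i.e. a persistent-activity memory).
```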
October 27, 2025 at 1:51 PM
Reposted by Hsuan-Pei Huang
Now accepting paper submissions for the “Neuro for AI & AI for Neuro: Towards Multi-Modal Natural Intelligence” workshop at #AAAI2026!

🔗 https://neuroai-multimodal-workshop.github.io/

@aaai.org
October 27, 2025 at 11:28 PM
Reposted by Hsuan-Pei Huang
What is consciousness, and could AI have it? It's an honour to be giving the 2025 Voltaire Lecture, this Hallowe'en (Fri 31/10), for @humanists.uk, at @conwayhall.bsky.social in London, 19:30-21:00 (also livestreamed) humanists.uk/events/volta... 👻
What is consciousness, and could AI have it? | The Voltaire Lecture 2025, with Professor Anil Seth
Professor Anil Seth is a neuroscientist, author, and public speaker who has pioneered research into the brain basis of consciousness for more than 20 years
humanists.uk
October 28, 2025 at 10:48 AM
Reposted by Hsuan-Pei Huang
Pleased to share new work with @sflippl.bsky.social @eberleoliver.bsky.social @thomasmcgee.bsky.social & undergrad interns at Institute for Pure and Applied Mathematics, UCLA.

Algorithmic Primitives and Compositional Geometry of Reasoning in Language Models
www.arxiv.org/pdf/2510.15987

🧵1/n
October 27, 2025 at 6:13 PM