Aaron Milstein
@neurosutras.bsky.social
Computational neuroscientist studying learning and memory in health and disease. Dad, yogi, Assistant Professor at Rutgers University.
In 10 weeks, 12 undergraduates from all around the country completed a simulation of the excitement, rigor, and social bonding experience of #neuroscience grad school @rwjms.bsky.social. Their final research presentations were incredible! I'm so proud! #neuroSURP
August 2, 2025 at 1:42 AM
Excited for my student Sam Gritz to present his poster, "Spatially tuned inhibition is not required to explain place field properties in models with NMDARs" at the Inhibition in the CNS #GRC. Then my talk Wed, "Cellular and subcellular specialization enable biology constrained deep learning." 🧠💻
July 7, 2025 at 2:20 AM
We found that dendritic target propagation is compatible with multiple experimentally- and theoretically-supported synaptic plasticity rules, including BCM, Temporally Contrastive Hebbian learning, and BTSP! (13/15)
May 27, 2025 at 6:46 PM
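(A minimal NumPy sketch of one of the rules named above, the BCM rule with a sliding modification threshold; the form and parameter names here are the textbook version, not code from the preprint.)

```python
import numpy as np

def bcm_update(w, x, y, theta, lr=1e-3, tau_theta=100.0, dt=1.0):
    """One step of the BCM plasticity rule: LTD when postsynaptic rate y is below
    the sliding threshold theta, LTP above it; theta slowly tracks <y^2>."""
    dw = lr * x * y * (y - theta)                       # Hebbian term gated by (y - theta)
    theta = theta + dt / tau_theta * (y ** 2 - theta)   # slow threshold adaptation
    return w + dw, theta
```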
Interestingly, the dendritic target propagation algorithm performs better with Hebbian plasticity in dendrite-targeting interneurons, making a strong experimentally testable prediction that dendritic inhibition should adapt during learning. (12/15)
May 27, 2025 at 6:45 PM
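(A toy sketch of the kind of Hebbian adaptation of dendrite-targeting inhibition that the prediction above points to; the weight shapes and decay term are illustrative assumptions, not the preprint's implementation.)

```python
import numpy as np

def hebbian_inhibitory_update(w_ie, inh_rate, dend_rate, lr=1e-3, decay=1e-4):
    """Hebbian strengthening of interneuron -> dendrite weights (w_ie, shape
    [n_excitatory, n_interneurons]) in proportion to coincident interneuron and
    dendritic activity, with slow decay, so dendritic inhibition adapts during
    learning to track the top-down drive it must cancel."""
    dw = lr * np.outer(dend_rate, inh_rate) - decay * w_ie
    return w_ie + dw
```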
It works by locally computing error in excitatory neuron dendrites as the difference between top-down excitation and lateral inhibition. Unexpected supervisory input results in dendritic calcium spikes and plasticity in the right neurons in the right layers! (11/15)
May 27, 2025 at 6:45 PM
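(A minimal sketch of the local error computation described above, assuming a simple rate model; the calcium-spike threshold and gating variables are stand-ins, not the paper's code.)

```python
import numpy as np

def dendritic_error_step(w_ff, x, topdown_exc, lateral_inh, lr=1e-2, ca_thresh=0.1):
    """Local error in the apical dendrites of a layer of excitatory neurons:
    error = top-down excitation - lateral (dendrite-targeting) inhibition.
    Crossing ca_thresh stands in for a dendritic calcium spike that gates
    plasticity of the feedforward weights w_ff (shape [n_post, n_pre])."""
    error = topdown_exc - lateral_inh                    # per-neuron dendritic error
    gate = (np.abs(error) > ca_thresh).astype(float)     # "calcium spike" plasticity gate
    w_ff = w_ff + lr * np.outer(gate * error, x)         # error-gated, input-correlated update
    return w_ff, error
```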
We designed an algorithm called “dendritic target propagation” that performs comparably to backprop, and outperforms unsupervised Hebbian learning, on the MNIST handwritten digit classification task. (10/15)
May 27, 2025 at 6:45 PM
Synaptic weights in standard ANNs are trained with an algorithm called “backpropagation of error.” Can the brain approximate gradient descent with cellular and subcellular specialization? (9/15)
May 27, 2025 at 6:44 PM
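(For reference, a bare-bones version of backpropagation of error for a two-layer network with a sigmoid nonlinearity and squared-error loss; this is the textbook algorithm the post refers to, not the model from the preprint.)

```python
import numpy as np

def backprop_step(x, target, W1, W2, lr=0.1):
    """One gradient-descent step of backpropagation for a 2-layer sigmoid network
    trained with mean-squared error."""
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    h = sig(W1 @ x)                                    # forward pass: hidden layer
    y = sig(W2 @ h)                                    # forward pass: output layer
    delta_out = (y - target) * y * (1.0 - y)           # output-layer error
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)     # error propagated backward
    W2 = W2 - lr * np.outer(delta_out, h)              # gradient-descent weight updates
    W1 = W1 - lr * np.outer(delta_hid, x)
    return W1, W2, y
```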
So, to explore the implications of BTSP for top-down modulation of learning in the brain, we need to build multi-layer artificial neural networks that have dendrites and dendrite-targeting inhibitory neurons! We call these “dendritic EIANNs”. (6/15)
May 27, 2025 at 6:44 PM
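(A rough structural sketch of what one layer of a "dendritic EIANN" might contain: excitatory units with separate somatic and dendritic compartments plus dendrite-targeting inhibitory units. Class and attribute names are illustrative assumptions, not the preprint's API.)

```python
import numpy as np

class DendriticEIANNLayer:
    """One layer: excitatory neurons driven by feedforward input at the soma and by
    top-down excitation and lateral, dendrite-targeting inhibition at the dendrite."""
    def __init__(self, n_input, n_exc, n_inh, n_topdown, seed=0):
        rng = np.random.default_rng(seed)
        self.w_ff = rng.normal(0.0, 0.1, (n_exc, n_input))    # input -> excitatory soma
        self.w_td = rng.normal(0.0, 0.1, (n_exc, n_topdown))  # feedback -> excitatory dendrite
        self.w_ei = rng.uniform(0.0, 0.1, (n_inh, n_exc))     # excitatory -> interneurons
        self.w_ie = rng.uniform(0.0, 0.1, (n_exc, n_inh))     # interneurons -> dendrite

    def forward(self, x, topdown):
        soma = np.maximum(self.w_ff @ x, 0.0)                 # somatic (feedforward) activity
        inh = np.maximum(self.w_ei @ soma, 0.0)               # dendrite-targeting inhibition
        dend = self.w_td @ topdown - self.w_ie @ inh          # dendritic compartment (local error)
        return soma, dend, inh
```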
New #NeuroAI #compneurosky preprint! To better understand how target-directed learning works in the brain, we sought to engineer an artificial neural network that comprises only experimentally supported biological building blocks yet is capable of solving complex image classification tasks. (1/15)
May 27, 2025 at 6:40 PM
I stand with science. No ceding global leadership in technology and medicine to China or Russia. Cures for our kids. Treatments for our vulnerable and first responders. Sharing the joy of discovery with our students and trainees. Contributing to the development of civil society. Democracy now.
March 7, 2025 at 7:25 PM
Ohhhhh! It finally clicked for me. THAT's why they're pushing policy illegally without waiting for Congress. Because they know Congress moves too slowly and April 1 is rapidly approaching. They won't have Congress in a bag much longer.
February 17, 2025 at 1:31 AM
Still no support for MPI parallelism?
December 21, 2024 at 2:48 AM
I think the point is it isn't taught in detail or tested on exams. It does strike me that "learning by observation" is understudied in neuroscience. Also, a small subset of neuroscientists know what "blocking" is:
December 15, 2024 at 4:02 PM
We then experimented with a slightly more complex task: learning to find a reward in a 2D environment with obstacles. Using our bio-inspired learning rule, we learned an efficient path to the reward in fewer trials than with the classical temporal difference and Hebbian learning rules. (8/9)
October 6, 2023 at 1:57 AM
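(For context, a tabular TD(0) value update, one of the classical baselines mentioned above; gridworld states are just integer indices here, and the parameters are generic defaults rather than the values used in the study.)

```python
import numpy as np

def td0_episode(V, states, rewards, alpha=0.1, gamma=0.9):
    """Tabular TD(0) value update over one trajectory through a gridworld.
    states: visited state indices; rewards[t]: reward on the t-th transition."""
    for t in range(len(states) - 1):
        s, s_next, r = states[t], states[t + 1], rewards[t]
        V[s] += alpha * (r + gamma * V[s_next] - V[s])   # TD-error update
    return V
```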
The SR is an influential idea in reinforcement learning and serves as a cognitive mechanism to predict future states. We noticed that the intrinsic asymmetry and long timescale of the BTSP rule make it naturally well-suited for this! We demo this first in a simple linear track. (7/9)
October 6, 2023 at 1:54 AM
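(A minimal sketch of the standard TD update for the successor representation referenced above; this is the generic SR learning rule, not the BTSP-based version from the paper.)

```python
import numpy as np

def sr_td_update(M, s, s_next, alpha=0.1, gamma=0.95):
    """TD update of the successor representation M, where M[s, s'] estimates the
    discounted expected future occupancy of state s' starting from state s."""
    onehot = np.zeros(M.shape[0])
    onehot[s] = 1.0
    M[s] += alpha * (onehot + gamma * M[s_next] - M[s])   # SR prediction error
    return M
```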
We then combined this setup with some ultraslow VO2 devices (with seconds-long decay) to keep track of slow intracellular biochemical eligibility traces (ET) and instructive signals (IS) required for behavioral-timescale plasticity (BTSP), a biological mechanism for one-shot learning! (5/9)
October 6, 2023 at 1:50 AM
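(A simplified, potentiation-only sketch of a BTSP-style update assuming exponentially decaying eligibility and instructive traces with seconds-long time constants; variable names and parameters are illustrative, not taken from the hardware implementation.)

```python
import numpy as np

def btsp_step(w, pre_activity, instructive, et, is_trace, dt=1.0,
              tau_et=1500.0, tau_is=500.0, lr=0.05):
    """One time step (dt in ms) of a behavioral-timescale plasticity (BTSP)-style
    update: presynaptic activity leaves a seconds-long eligibility trace (ET), a
    dendritic plateau / instructive signal (IS) leaves its own slow trace, and
    weights change in proportion to the overlap of the two traces."""
    et = et - dt * et / tau_et + pre_activity                    # per-synapse eligibility traces
    is_trace = is_trace - dt * is_trace / tau_is + instructive   # global instructive trace
    w = w + lr * dt * et * is_trace                              # potentiate where ET and IS overlap
    return w, et, is_trace
```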
Starting with the short timescales, we used the flexible conductance decay of VO2 as a reset current to control a "soft" refractory period in simple model neurons (leaky integrate-and-fire, modeled as RC circuits), giving us fast somatic spikes and slow dendritic spikes. (4/9)
October 6, 2023 at 1:46 AM
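(A toy leaky integrate-and-fire neuron with a decaying reset conductance standing in for the VO2 device described above; the time constants and units are made up for illustration.)

```python
import numpy as np

def lif_soft_refractory(I, dt=0.1, tau_m=10.0, v_thresh=1.0, v_reset=0.0,
                        tau_reset=5.0, g_reset_spike=5.0):
    """Leaky integrate-and-fire neuron (an RC circuit) with a reset conductance that
    decays with time constant tau_reset after each spike, producing a 'soft'
    refractory period (short tau_reset -> fast somatic spikes, long -> slow
    dendritic spikes)."""
    v, g_reset = 0.0, 0.0
    vs, spikes = [], []
    for i_t in I:
        g_reset *= np.exp(-dt / tau_reset)                 # reset conductance decays
        v += dt * (-v + i_t - g_reset * (v - v_reset)) / tau_m
        spiked = v >= v_thresh
        if spiked:                                         # spike: reset and re-arm
            v = v_reset
            g_reset += g_reset_spike
        vs.append(v)
        spikes.append(spiked)
    return np.array(vs), np.array(spikes)
```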
This kind of exponential decay appears very often in neuroscience, so we decided to exploit the material properties to emulate learning in neurons. The tuning works by changing the device temperature to get decay time-constants from (cold) sub-millisecond to (hot) seconds long. (3/9)
October 6, 2023 at 1:44 AM
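(A one-line version of the tunable exponential decay described above, g(t) = g0 * exp(-t/tau); the specific tau values are placeholders for the cold and hot device regimes.)

```python
import numpy as np

def exp_decay(g0, t, tau):
    """Exponential conductance decay with time constant tau (tuned by device
    temperature from sub-millisecond when cold to seconds when hot)."""
    return g0 * np.exp(-t / tau)

t = np.linspace(0.0, 2000.0, 1000)          # time in ms
fast = exp_decay(1.0, t, tau=0.5)           # cold: sub-millisecond decay
slow = exp_decay(1.0, t, tau=1000.0)        # hot: seconds-long decay
```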