yang
@yang-chu.bsky.social
minds and machines, computational neuroscience, machine learning, computer architecture
Reposted by yang
Psst - neuromorphic folks. Did you know that you can solve the SHD dataset with 90% accuracy using only 22 kb of parameter memory by quantising weights and delays? Check out our preprint with @pengfei-sun.bsky.social and @danakarca.bsky.social, or read the TLDR below. 👇🤖🧠🧪 arxiv.org/abs/2510.27434
Exploiting heterogeneous delays for efficient computation in low-bit neural networks
Neural networks rely on learning synaptic weights. However, this overlooks other neural parameters that can also be learned and may be utilized by the brain. One such parameter is the delay: the brain...
arxiv.org
November 13, 2025 at 5:40 PM
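To make the idea in the post above concrete, here is a minimal Python sketch of quantising both weights and per-synapse delays to a few bits each. It assumes a plain uniform quantiser; the layer sizes, bit widths, and clipping ranges below are illustrative choices, not the ones used in the preprint.

import numpy as np

def quantise_uniform(x, n_bits, x_min, x_max):
    """Uniformly quantise x to 2**n_bits levels on [x_min, x_max] (illustrative scheme)."""
    levels = 2 ** n_bits - 1
    x = np.clip(x, x_min, x_max)
    q = np.round((x - x_min) / (x_max - x_min) * levels)
    return x_min + q / levels * (x_max - x_min)

# Hypothetical synaptic parameters for one layer: weights plus heterogeneous delays.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=(700, 128))       # e.g. 700 input channels -> 128 hidden
delays_ms = rng.uniform(0.0, 25.0, size=(700, 128))   # per-synapse axonal delays

# Quantise both parameter types to low bit widths (widths here are assumptions).
w_q = quantise_uniform(weights, n_bits=3, x_min=-1.5, x_max=1.5)
d_q = quantise_uniform(delays_ms, n_bits=4, x_min=0.0, x_max=25.0)

# Parameter memory in kilobits if stored at those bit widths.
kbits = (weights.size * 3 + delays_ms.size * 4) / 1000
print(f"parameter memory ~ {kbits:.0f} kbit")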
Reposted by yang
New preprint! What happens if you add neuromodulation to spiking neural networks and let them go wild with it? TLDR: it can improve performance, especially in challenging sensory processing tasks. Explainer thread below. 🤖🧠🧪 www.biorxiv.org/content/10.1...
Neuromodulation enhances dynamic sensory processing in spiking neural network models
Neuromodulators allow circuits to dynamically change their biophysical properties in a context-sensitive way. In addition to their role in learning, neuromodulators have been suggested to play a role ...
www.biorxiv.org
September 18, 2025 at 4:30 PM
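A rough Python sketch of what context-sensitive neuromodulation of a spiking layer could look like: a leaky integrate-and-fire population whose gain and time constant are scaled by a modulatory signal. The modulatory signal here is a placeholder sinusoid, and the whole formulation is an assumption for illustration, not the model from the preprint.

import numpy as np

def lif_step(v, i_in, tau, gain, v_th=1.0, dt=1e-3):
    """One Euler step of a leaky integrate-and-fire layer with modulated gain and tau."""
    v = v + dt / tau * (-v + gain * i_in)
    spikes = (v >= v_th).astype(float)
    v = v * (1.0 - spikes)              # reset membrane potential after a spike
    return v, spikes

rng = np.random.default_rng(1)
n = 64
v = np.zeros(n)
base_tau, base_gain = 20e-3, 1.0

for t in range(200):
    i_in = rng.normal(0.3, 0.1, size=n)
    # Hypothetical neuromodulatory signal m in [0, 1]; in a learned model it would be
    # produced by the network itself from the current context.
    m = 0.5 + 0.5 * np.sin(2 * np.pi * t / 200)
    tau = base_tau * (0.5 + m)          # modulation stretches or compresses the time constant
    gain = base_gain * (0.5 + m)        # and rescales input gain, switching the neurons' "mode"
    v, spikes = lif_step(v, i_in, tau, gain)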
Reposted by yang
How does the structure of a neural circuit shape its function?

@neuralreckoning.bsky.social & I explore this in our new preprint:

doi.org/10.1101/2025...

🤖🧠🧪

🧵1/9
August 1, 2025 at 8:27 AM
Reposted by yang
How can we best use AI in science?

Nine other research fellows from @imperial-ix.bsky.social and I use AI methods in domains from plant biology (🌱) to neuroscience (🧠) and particle physics (🎇).

Together we suggest 10 simple rules @plos.org 🧵

doi.org/10.1371/jour...
July 25, 2025 at 10:58 AM
Reposted by yang
New preprint for #neuromorphic and #SpikingNeuralNetwork folk (with @pengfei-sun.bsky.social).

arxiv.org/abs/2507.16043

Surrogate gradients are popular for training SNNs, but some researchers question whether they can really learn complex temporal spike codes. TLDR: we tested this, and yes they can! 🧵👇

🤖🧠🧪
Beyond Rate Coding: Surrogate Gradients Enable Spike Timing Learning in Spiking Neural Networks
We investigate the extent to which Spiking Neural Networks (SNNs) trained with Surrogate Gradient Descent (Surrogate GD), with and without delay learning, can learn from precise spike timing beyond fi...
arxiv.org
July 24, 2025 at 5:03 PM
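For readers new to the method discussed above: surrogate gradient descent keeps the hard spike threshold on the forward pass but swaps in a smooth derivative on the backward pass. A minimal PyTorch sketch follows; the fast-sigmoid surrogate and its slope are common choices and not necessarily the ones used in the preprint.

import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike forward, fast-sigmoid derivative backward."""

    @staticmethod
    def forward(ctx, v_minus_thresh, slope=10.0):
        ctx.save_for_backward(v_minus_thresh)
        ctx.slope = slope
        return (v_minus_thresh > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Smooth stand-in for the true derivative, which is zero almost everywhere.
        surrogate = 1.0 / (ctx.slope * v.abs() + 1.0) ** 2
        return grad_output * surrogate, None

spike = SurrogateSpike.apply
v = torch.randn(8, requires_grad=True)     # membrane potential minus threshold
spike(v).sum().backward()                  # gradients flow despite the hard threshold
print(v.grad)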
Reposted by yang
The REAL question on everyone's lips though...

Blog: gabrielbena.github.io/blog/2025/be...
Thread: bsky.app/profile/sola...
June 5, 2025 at 5:05 PM
Reposted by yang
New #Preprint Alert!! 🤖 🧠 🧪
What if we could train neural cellular automata to develop continuous universal computation through gradient descent?! We have started to chart a path toward this goal in our new preprint:
arXiv: arxiv.org/abs/2505.13058
Blog: gabrielbena.github.io/blog/2025/be...
🧵⬇️
A Path to Universal Neural Cellular Automata | Gabriel Béna
Exploring how neural cellular automata can develop continuous universal computation through training by gradient descent
gabrielbena.github.io
June 4, 2025 at 6:25 PM
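A bare-bones neural cellular automaton update step in Python, in the style of the standard NCA recipe (fixed perception filters followed by a small learned per-cell update network) that this line of work builds on. The channel counts and filters are assumptions; the "universal computation" training setup from the preprint is not shown.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NCA(nn.Module):
    def __init__(self, channels=16, hidden=64):
        super().__init__()
        # Fixed perception filters: identity plus Sobel x/y, applied per channel.
        ident = torch.tensor([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=torch.float32)
        sobel_x = torch.tensor([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=torch.float32) / 8
        kernels = torch.stack([ident, sobel_x, sobel_x.t()])
        self.register_buffer("filters", kernels.repeat(channels, 1, 1).unsqueeze(1))
        self.channels = channels
        self.update = nn.Sequential(
            nn.Conv2d(channels * 3, hidden, 1), nn.ReLU(),
            nn.Conv2d(hidden, channels, 1),
        )

    def forward(self, x, steps=8):
        for _ in range(steps):
            y = F.conv2d(x, self.filters, padding=1, groups=self.channels)
            x = x + self.update(y)        # residual local update, shared across all cells
        return x

grid = torch.zeros(1, 16, 32, 32)
grid[:, :, 16, 16] = 1.0                  # seed a single active cell
out = NCA()(grid)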
Reposted by yang
How do babies and blind people learn to localise sound without labelled data? We propose that innate mechanisms can provide coarse-grained error signals to bootstrap learning. New preprint from @yang-chu.bsky.social. 🤖🧠🧪

arxiv.org/abs/2001.10605
Learning spatial hearing via innate mechanisms
The acoustic cues used by humans and other animals to localise sounds are subtle, and change during and after development. This means that we need to constantly relearn or recalibrate the auditory spa...
arxiv.org
April 24, 2025 at 4:57 PM
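The core idea of bootstrapping from a coarse innate error signal could look roughly like the Python sketch below. It assumes, purely for illustration, that the only teaching signal is the sign of the residual error after orienting toward the estimate (source left or right of the midline); the cues, network, and learning rule in the preprint are not reproduced here.

import torch
import torch.nn as nn

# Hypothetical setup: a small network maps binaural cues to an azimuth estimate,
# supervised only by a coarse innate signal (left vs right of the midline).
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for step in range(2000):
    azimuth = torch.empty(64, 1).uniform_(-90, 90)               # true source angle (deg)
    cues = torch.cat([torch.sin(torch.deg2rad(azimuth)),         # stand-ins for ITD/ILD cues
                      0.5 * torch.sin(torch.deg2rad(azimuth))], dim=1)
    estimate = 90 * torch.tanh(net(cues))
    coarse_error = torch.sign(azimuth - estimate).detach()       # innate signal: just a sign
    loss = (-coarse_error * estimate).mean()                     # push estimate toward the source
    opt.zero_grad(); loss.backward(); opt.step()

# Because only the sign of the error is used, this reduces to an L1-style update:
# the estimate is nudged toward the source until the left/right signal stops flipping.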