Analog dynamics may help, but the theoretical scaling of neurons mostly comes from event-driven parallelism.
arxiv.org/abs/2507.17886
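Rough intuition, in code: with event-driven updates the per-step cost tracks how many neurons actually spike, not how many exist. A toy numpy sketch (sizes and names are mine, not from the paper):

```python
import numpy as np

# Toy network: dense weights kept only to make the comparison explicit.
rng = np.random.default_rng(0)
n = 1_000                        # neurons
W = rng.normal(0, 0.1, (n, n))   # synaptic weights

v = np.zeros(n)                  # membrane potentials
spikes = rng.random(n) < 0.01    # ~1% of neurons fire this step

# Dense update: O(n^2) work every step, regardless of activity.
v_dense = v + W @ spikes.astype(float)

# Event-driven update: O(k * n) work, where k = number of spikes.
active = np.flatnonzero(spikes)
v_event = v + W[:, active].sum(axis=1)

assert np.allclose(v_dense, v_event)
```

At 1% activity the event-driven pass touches ~1% of the synapses, which is where the claimed scaling comes from; per the post, analog dynamics may help on top of that, but they are not the main driver.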
#SfN25 #SfN2025 @sfn.org
(Or check it out for yourself 👉 nwb4edu.github.io)
In a sense yes, but does network science help us understand the brain as a complex system? Intriguing paper.
If anything, the paper has 800+ refs!
#neuroskyence #complexsystems
doi.org/10.1016/j.pl...
www.nature.com/articles/s41...
Our paper, led by James Malkin, on the energetics of synaptic precision: elifesciences.org/articles/92595 (contains some good refs to other papers too)
link.springer.com/article/10.1...
"Cognition all the way down 2.0: neuroscience beyond neurons in the diverse intelligence era"
🧪
Previously, we showed that neural representations for the control of movement are largely distinct following supervised or reinforcement learning. The latter most closely matches non-human primate (NHP) recordings.
We used a combination of neural recordings & modelling to show that RL yields neural dynamics closer to biology, with useful continual learning properties.
www.biorxiv.org/content/10.1...
In 2017, #ScienceBooks toured the dynamics that established her as "the most iconic of all female scientists." https://scim.ag/47tkKYA
Linear attention has cheap, unbounded memory but low precision, whereas softmax attention has expensive, bounded memory but high precision. These can be combined to build better transformers.
arxiv.org/abs/2506.00744
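The trade-off is easy to make concrete. A minimal numpy sketch of single-head attention, with an illustrative positive feature map of my own choosing (not the paper's construction):

```python
import numpy as np

def softmax_attn(Q, K, V):
    # Exact but quadratic: materializes the T x T score matrix, and the
    # "memory" is the full, growing token history.
    S = Q @ K.T / np.sqrt(Q.shape[-1])
    A = np.exp(S - S.max(axis=-1, keepdims=True))
    return (A / A.sum(axis=-1, keepdims=True)) @ V

def linear_attn(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    # Kernelized: compresses the whole history into a fixed d x d state,
    # so memory is cheap and unbounded in length, but the compression is
    # lossy (lower retrieval precision). Non-causal for brevity; a causal
    # version updates S and Z recurrently, one token at a time.
    S = phi(K).T @ V              # d x d summary of all keys/values
    Z = phi(K).sum(axis=0)        # d-dim normalizer
    return (phi(Q) @ S) / (phi(Q) @ Z)[:, None]

# Tiny demo: same inputs, two memory regimes.
rng = np.random.default_rng(0)
T, d = 6, 4
Q, K, V = (rng.normal(size=(T, d)) for _ in range(3))
print(softmax_attn(Q, K, V))
print(linear_attn(Q, K, V))
```

A hybrid in the spirit of the post would keep the cheap d x d linear state for the long tail of context and reserve exact softmax for a small recent window; the paper's specific construction is at the link above.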
Work with @summerfieldlab.bsky.social, @tsonj.bsky.social, Lukas Braun and Jan Grohn
www.nature.com/articles/s41...
#neuroskyence
www.thetransmitter.org/neural-dynam...
🔗 https://neuroai-multimodal-workshop.github.io/
@aaai.org
Algorithmic Primitives and Compositional Geometry of Reasoning in Language Models
www.arxiv.org/pdf/2510.15987
🧵1/n