Jonathan Kadmon
@kadmonj.bsky.social
Assistant professor of theoretical neuroscience @ELSCbrain. My opinions are deterministic activity patterns in my neocortex. http://neuro-theory.org
Reposted by Jonathan Kadmon
🚨New spotlight paper at Neurips 2025🚨

We show that in sign-diverse networks, inherent non-gradient “curl” terms arise and, depending on network architecture, can destabilize gradient-descent solutions or paradoxically accelerate learning beyond pure gradient flow.

🧵⬇️
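For intuition, a minimal toy in Python/NumPy (my own sketch, not the model from the paper): gradient flow on a quadratic loss versus dynamics θ̇ = -(I + cA)∇L, where A is an antisymmetric matrix supplying the non-gradient “curl” component. On an ill-conditioned loss the rotational term mixes fast and slow directions and can converge far faster than pure gradient flow; other choices of A can instead destabilize the minimum.

```python
# Toy sketch (mine, not the paper's model): pure gradient flow vs.
# "curl" dynamics theta_dot = -(I + c*A) @ grad L, with A antisymmetric.
import numpy as np

H = np.diag([1.0, 0.01])                # ill-conditioned quadratic loss: L = 0.5 * theta @ H @ theta
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])             # antisymmetric "curl" term (A.T == -A)

def flow(M, theta0, lr=1e-2, steps=2000):
    """Euler-integrate theta_dot = -M @ grad L and return the final loss."""
    theta = theta0.copy()
    for _ in range(steps):
        theta -= lr * (M @ (H @ theta))  # grad L = H @ theta
    return 0.5 * theta @ H @ theta

theta0 = np.array([1.0, 1.0])
print("gradient flow:", flow(np.eye(2), theta0))            # crawls along the slow direction
print("curl descent: ", flow(np.eye(2) + 5.0 * A, theta0))  # rotation couples fast and slow modes
```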

www.arxiv.org/abs/2510.02765
Curl Descent: Non-Gradient Learning Dynamics with Sign-Diverse Plasticity
Gradient-based algorithms are a cornerstone of artificial neural network training, yet it remains unclear whether biological neural networks use similar gradient-based strategies during learning. Expe...
www.arxiv.org
October 10, 2025 at 5:53 PM
In the past year and a half, we have been intensively protesting and fighting the Israeli government in an attempt to stop the war, secure the release of all hostages, and prevent the ongoing humanitarian crisis. Until now, our protests have primarily been focused internally.
July 27, 2025 at 8:36 PM
Reposted by Jonathan Kadmon
If you're Jewish, please sign this from Jews for Food Aid for People in Gaza:
foodaidforgaza.org

and anyone can donate to the organization they recommend, Gaza soup kitchen, which serves up to 6000 meals daily: givebutter.com/AReeXq
Jews for Food Aid for Gaza
Jewish people support food aid for families in Gaza and an immediate end to the Israeli government’s food aid blockade. #JewsForFoodAidForPeopleInGaza
foodaidforgaza.org
May 16, 2025 at 3:54 AM
Maher will be presenting this work at @cosynemeeting.bsky.social. If you are interested, come check it out.
Friday 2-083.
March 27, 2025 at 8:15 PM
(1/6) Excited to share a new preprint from our lab! Can large, deep nonlinear neural networks trained with indirect, low-dimensional error signals compete with full-fledged backpropagation? Tl;dr: Yes! arxiv.org/abs/2502.20580.
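To make the setup concrete, a toy sketch in NumPy (my construction; the paper's learning rule may differ): the readout learns from the full error, while the hidden layer only ever receives a k-dimensional compression of that error, routed through fixed random feedback weights, with k much smaller than the output dimension.

```python
# Toy sketch (my construction, not necessarily the paper's rule): the hidden
# layer never sees the full n_out-dimensional error, only a k-dimensional
# compression of it routed through fixed random feedback weights.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out, k = 20, 100, 30, 3                  # k << n_out: low-dimensional feedback

W1 = rng.standard_normal((n_hid, n_in)) / np.sqrt(n_in)
W2 = rng.standard_normal((n_out, n_hid)) / np.sqrt(n_hid)
P = rng.standard_normal((k, n_out)) / np.sqrt(n_out)    # fixed compression of the error
B = rng.standard_normal((n_hid, k)) / np.sqrt(k)        # fixed feedback into the hidden layer

# Random linear teacher provides the regression targets.
X = rng.standard_normal((500, n_in))
Y = X @ (rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)).T

lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1.T)                    # hidden activity
    err = Y - h @ W2.T                       # full output error
    W2 += lr * err.T @ h / len(X)            # the readout sees the full error...
    delta = (err @ P.T @ B.T) * (1 - h**2)   # ...the hidden layer only P @ err; tanh' = 1 - h^2
    W1 += lr * delta.T @ X / len(X)

print("final MSE:", np.mean(err**2))
```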
Training Large Neural Networks With Low-Dimensional Error Feedback
Training deep neural networks typically relies on backpropagating high-dimensional error signals, a computationally intensive process with little evidence supporting its implementation in the brain. Ho...
arxiv.org
March 23, 2025 at 9:23 AM
Reposted by Jonathan Kadmon
🚨 Paper Alert! 🚨
1/n Thrilled to share our latest research, now published in Nature Communications! 🎉 This study dives deep into how the cerebellum shapes cortical preparatory activity during motor adaptation.
www.nature.com/articles/s41...
#neuroskyence #motorcontrol #cerebellum #motoradaptation
Cerebellar output shapes cortical preparatory activity during motor adaptation - Nature Communications
Functional role of the cerebellum in motor adaptation is not fully understood. The authors show that cerebellar signals act as low-dimensional feedback which constrains the structure of the preparator...
www.nature.com
March 22, 2025 at 6:35 PM
Reposted by Jonathan Kadmon
Recruiting postdocs! Please get in touch if you wanna discuss projects on biologically plausible learning in RNNs through TUM's global postdoctoral fellowship. I'll be at Cosyne this year if you wanna talk more. www.tum.de/ueber-die-tu...
TUM Global Postdoc Fellowship - TUM
www.tum.de
March 7, 2025 at 2:47 PM
Reposted by Jonathan Kadmon
Re-posting is appreciated: We have a fully funded PhD position in the CMC lab @cmc-lab.bsky.social (at @tudresden_de). Use forms.gle/qiAv5NZ871kv... to send your application and find more information; the deadline is April 30. Learn more about the CMC lab at cmclab.org, and email me if you have questions.
forms.gle
February 20, 2025 at 2:50 PM
New preprint: Neural Mechanisms of Flexible Perceptual Inference!
How does the brain simultaneously infer both context and meaning from the same stimuli, enabling rapid, flexible adaptation in dynamic environments? www.biorxiv.org/content/10.1...
Led by John Scharcz and @janbauer.bsky.social
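One generic way to pose the computational problem (my own illustration, not the model in the preprint): treat context and meaning as joint latent variables. In the toy below, the same noisy stimulus implies opposite meanings in the two contexts, so the observer must infer the context from the stimulus stream before any single stimulus can be decoded.

```python
# Generic illustration (not the preprint's model): joint Bayesian inference
# of a slowly varying context and a per-trial stimulus "meaning". The two
# contexts flip the stimulus-to-meaning mapping.
import numpy as np

rng = np.random.default_rng(2)
sigma, p_A = 0.8, 0.8                   # stimulus noise; meaning 'A' is more common,
                                        # which is what makes the context identifiable
mu = {(0, 'A'): +1.0, (0, 'B'): -1.0,   # context 0: A -> +1, B -> -1
      (1, 'A'): -1.0, (1, 'B'): +1.0}   # context 1: the mapping is flipped

def lik(s, c, m):
    """Gaussian likelihood p(stimulus s | context c, meaning m)."""
    return np.exp(-0.5 * ((s - mu[(c, m)]) / sigma) ** 2)

# Simulate a stream of stimuli generated under context 1.
meanings = rng.choice(['A', 'B'], size=30, p=[p_A, 1 - p_A])
stimuli = np.array([mu[(1, m)] for m in meanings]) + sigma * rng.standard_normal(30)

# Accumulate the log-posterior over context, marginalizing each trial's meaning.
log_post = np.log([0.5, 0.5])
for s in stimuli:
    for c in (0, 1):
        log_post[c] += np.log(p_A * lik(s, c, 'A') + (1 - p_A) * lik(s, c, 'B'))
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("p(context | stream):", post)     # should strongly favor context 1

# Decode a new ambiguous stimulus, marginalizing over the inferred context.
s_new = -0.9
joint = np.array([[post[c] * prior * lik(s_new, c, m)
                   for m, prior in (('A', p_A), ('B', 1 - p_A))] for c in (0, 1)])
print("p(meaning | s_new):", joint.sum(axis=0) / joint.sum())
```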
Neural mechanisms of flexible perceptual inference
What seems obvious in one context can take on an entirely different meaning if that context shifts. While context-dependent inference has been widely studied, a fundamental question remains: how does ...
www.biorxiv.org
February 17, 2025 at 1:43 PM