Jonathan Cornford
@repromancer.bsky.social
Lecturer (Assistant Professor) in Computational Neuroscience at Leeds University School of Computer Science.

Combining insights from biological and artificial intelligence to develop resource-efficient AI.
Come by Poster 068 to learn why comp neuro studies should use exponentiated gradient descent!
March 29, 2025 at 5:22 PM
Biophysically accurate learning at scale 🌐 Testing the multiplicative EG update in neuron models shows that it adheres to realistic synaptic dynamics, making EG a promising candidate for models that reflect real neural circuits. 8/12
October 28, 2024 at 5:26 PM
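To make the update concrete, here is a minimal Python sketch of one GD step versus one EG step on a single linear unit with positive weights (the squared-error loss and all names are illustrative, not taken from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=10)            # inputs to one linear unit
    w = np.abs(rng.normal(size=10))    # positive synaptic weights
    target, eta = 1.0, 0.05            # illustrative target and step size

    # Squared-error gradient for a linear unit: dL/dw = (w.x - target) * x
    grad = (w @ x - target) * x

    w_gd = w - eta * grad              # GD: additive step, signs can flip
    w_eg = w * np.exp(-eta * grad)     # EG: multiplicative step, w stays positive

For positive weights the multiplicative step still descends, since w · exp(−η·g) ≈ w − η·g·w for small steps.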
Sparse, relevant inputs. When many input signals are irrelevant, EG outperforms GD by weighting the relevant few more heavily. This leads to faster, smoother learning and better matches how neurons handle background noise. 7/12 #SparseCoding
October 28, 2024 at 5:25 PM
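A toy version of this setting compares plain GD with unnormalized EG on sparse linear regression, where only k of d inputs matter (step sizes are illustrative and untuned; the paper's tasks are richer than this):

    import numpy as np

    rng = np.random.default_rng(1)
    d, k, n = 200, 3, 500
    w_true = np.zeros(d)
    w_true[:k] = 1.0                        # only k of d inputs are relevant
    X = rng.normal(size=(n, d))
    y = X @ w_true

    def final_loss(update, eta, steps=2000):
        w = np.full(d, 1.0 / d)             # small positive initialization
        for t in range(steps):
            i = t % n
            g = (w @ X[i] - y[i]) * X[i]    # per-example squared-error gradient
            w = update(w, g, eta)
        return np.mean((X @ w - y) ** 2)

    gd = lambda w, g, eta: w - eta * g
    eg = lambda w, g, eta: w * np.exp(-eta * g)  # valid here: w_true is nonnegative

    print("GD:", final_loss(gd, 1e-3), "EG:", final_loss(eg, 1e-2))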
Learning from noisy inputs. In tasks requiring selective focus, EG-trained networks ignore irrelevant signals more effectively than GD-trained ones, excelling at biologically relevant sensorimotor control. 6/12 #AI #Neuroscience
October 28, 2024 at 5:25 PM
Faster learning, less retraining 🏃 When networks must relearn after synapses are pruned, EG networks adapt more quickly and efficiently than GD networks, showing greater resilience and more stable learning. 5/12 #MachineLearning
October 28, 2024 at 5:24 PM
Resilience to synaptic pruning. EG-trained networks maintain accuracy better than GD-trained ones when synapses are pruned, a critical process in real neural networks during development and sleep. 4/12 #LearningAlgorithms #Neuroplasticity
October 28, 2024 at 5:23 PM
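Pruning here means removing a fraction of synapses, typically implemented by zeroing them with a fixed mask. The thread doesn't say which criterion the study uses, so the sketch below shows magnitude pruning as one common choice:

    import numpy as np

    def magnitude_prune(w, frac=0.5):
        # Zero the weakest `frac` of synapses by magnitude; keep the mask
        # so pruned synapses stay at zero during any subsequent relearning.
        keep = np.abs(w) >= np.quantile(np.abs(w), frac)
        return w * keep, keep

    w = np.random.default_rng(3).normal(size=8)
    w_pruned, mask = magnitude_prune(w, frac=0.5)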
Log-normal synaptic weights: EG gets it right! 🌱 Networks trained with EG naturally develop the log-normal weight distributions observed in biological brains, and those distributions stay stable, whereas GD-trained networks drift away from log-normal form after training. 3/12
October 28, 2024 at 5:21 PM
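The mechanism is simple to see in code: an EG step is additive in log-space, so log-weights accumulate many small independent terms and become approximately normal, which makes the weights themselves approximately log-normal. A toy sketch with random noise standing in for gradients (not the paper's simulation):

    import numpy as np

    rng = np.random.default_rng(2)
    w = np.full(10_000, 0.1)           # start every weight at the same value
    for _ in range(500):
        g = rng.normal(size=w.size)    # stand-in for noisy gradients
        w *= np.exp(-0.01 * g)         # EG step: additive in log(w)

    # log(w) is a sum of independent terms, hence ~normal, so w is ~log-normal
    print("log-weight mean and std:", np.log(w).mean(), np.log(w).std())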
EG respects neural biology better than GD 🧠 GD often flips synaptic weights between excitatory and inhibitory; EG's multiplicative updates never change a weight's sign, so it adheres to Dale's law and keeps every synapse aligned with biology. 2/12 #NeuroAI
October 28, 2024 at 5:20 PM
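The reason is structural: the EG factor exp(·) is strictly positive, so a multiplicative step can shrink a weight toward zero but never push it across, whereas an additive GD step can. A tiny check with arbitrary values, using the sign-respecting EG variant for mixed-sign weights (the paper's exact parameterization may differ):

    import numpy as np

    w = np.array([0.02, -0.02])     # one excitatory, one inhibitory synapse
    g = np.array([5.0, -5.0])       # gradients pushing both weights across zero
    eta = 0.01

    print("GD:", w - eta * g)                        # [-0.03, 0.03]: both flipped
    print("EG:", w * np.exp(-eta * g * np.sign(w)))  # signs preserved, magnitudes shrink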
Why does #compneuro need new learning methods? ANN models are usually trained with Gradient Descent (GD), which violates biological constraints such as Dale's law and fails to reproduce the log-normal weight distributions seen in real brains. Here we describe a superior learning algorithm for comp neuro: Exponentiated Gradients (EG)! 1/12 #neuroscience 🧪
October 28, 2024 at 5:18 PM
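For reference, the two update rules this thread compares, in their plain form (η is the learning rate and ∇L(w) the loss gradient; this is the standard unnormalized EG rule for nonnegative weights, and sign-respecting variants cover mixed excitatory/inhibitory weights):

    GD:  w ← w − η · ∇L(w)               (additive step)
    EG:  w ← w ⊙ exp(−η · ∇L(w))         (elementwise multiplicative step)

Because EG rescales rather than shifts, small weights take small steps and large weights take large ones, which is what yields the sign preservation and log-normal statistics described above.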