Surya Ganguli
@suryaganguli.bsky.social
Professor of Applied Physics at Stanford | Venture Partner a16z | Research in AI, Neuroscience, Physics
Our new paper "High-capacity associative memory in a quantum-optical spin glass." It achieves a 7-fold increase in capacity through a quantum-optical analog of short-term plasticity from neuroscience. Atoms (neurons) couple to motion and photons (synapses)!
arxiv.org/abs/2509.12202
September 16, 2025 at 4:55 PM
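For context, a minimal classical Hopfield-style sketch of what "associative memory capacity" refers to: binary patterns stored in pairwise couplings and recalled from a corrupted cue. This is not the paper's quantum-optical spin glass; the network size, pattern count, and noise level are purely illustrative.

```python
# Toy classical Hopfield associative memory (illustrative only; not the
# quantum-optical model in the paper).
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 10                          # N spins ("neurons"), P stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian couplings ("synapses") with zero self-coupling; this classical rule
# stores roughly 0.14*N random patterns before recall breaks down.
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0.0)

def recall(cue, steps=20):
    """Iterate deterministic spin updates until a fixed point or step limit."""
    s = cue.copy()
    for _ in range(steps):
        s_new = np.sign(W @ s)
        s_new[s_new == 0] = 1
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Flip 20% of one stored pattern's spins, then check recall.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 5, replace=False)
cue[flip] *= -1
overlap = recall(cue) @ patterns[0] / N
print(f"overlap with the stored pattern after recall: {overlap:.2f}")
```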
The best part of this job is seeing students graduate and launch their careers! Congrats to Feng Chen, Atsushi Yamamura, Tamra Nebabu, Linnie Wharton and Daniel Kunin. They are all going on to top positions across artificial intelligence, medicine, and physics. Proud of you!
June 17, 2025 at 5:48 PM
Many recent posts on free energy. Here is a summary from my class “Statistical mechanics of learning and computation” on the many relations between free energy, KL divergence, large deviation theory, entropy, Boltzmann distribution, cumulants, Legendre duality, saddle points, fluctuation-response…
May 2, 2025 at 7:22 PM
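A few of the textbook identities being related (with k_B = 1 and beta = 1/T); this is only a sketch of the standard relations, not the class notes themselves:

```latex
\begin{align}
  p(x) &= \frac{e^{-\beta E(x)}}{Z}, \quad Z = \sum_x e^{-\beta E(x)}, \quad
  F = -\tfrac{1}{\beta}\log Z
  && \text{(Boltzmann distribution, free energy)} \\
  \mathbb{E}_q[E] - \tfrac{1}{\beta} S(q)
  &= F + \tfrac{1}{\beta}\, D_{\mathrm{KL}}(q \,\Vert\, p) \;\ge\; F
  && \text{(Gibbs variational principle; equality iff } q = p) \\
  -\partial_\beta \log Z &= \langle E \rangle, \qquad
  \partial_\beta^2 \log Z = \mathrm{Var}(E)
  && \text{(energy cumulants, fluctuation--response)} \\
  -\beta F &= \log Z \;\simeq\; \sup_E \big[ S(E) - \beta E \big]
  && \text{(saddle point; Legendre duality with entropy)}
\end{align}
```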
:)
March 10, 2025 at 6:02 PM
I asked ChatGPT to explain Einstein's work in the style of Shakespeare (an open-ended question) and it gave me the following poem. I could not do better, and if a human did this I would say they were creative. It feels wrong to say GPT is not creative just because we understand its search process but not ours.
January 1, 2025 at 1:06 AM
While our theory is derived for local convolutional diffusion models, it still partially explains the outputs of more nonlocal diffusion models with self-attention, and reveals an intriguing role for attention in carving out semantic coherence from local patch mosaics.
December 31, 2024 at 4:54 PM
It also explains why diffusion models make mistakes in fine spatial features (fingers, limbs) due to excessive locality at late times in the reverse diffusion process. Trained diffusion models do this on FashionMNIST (3-limbed pants and shirts), and our theory reproduces it.
December 31, 2024 at 4:54 PM
Moreover, it explains how creative new diffusion model outputs, far from the training data, are constructed by mixing and matching different local training set image patches at different locations in the new output, yielding a local patch mosaic model of creativity.
December 31, 2024 at 4:54 PM
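A toy cartoon of that patch-mosaic picture: stitch a new image together by choosing, at each location, whichever training image best matches there, then averaging the overlaps. This is only an illustration of "mixing and matching local patches"; it is not the paper's closed-form theory, and the image size, patch size, and random data below are stand-ins.

```python
# Toy "local patch mosaic": assemble an output from local patches of
# different training images (illustration only; not the paper's theory).
import numpy as np

rng = np.random.default_rng(0)
H = W = 16                      # toy image size (illustrative)
P = 4                           # patch size
train = rng.random((50, H, W))  # stand-in "training set"
target = rng.random((H, W))     # stand-in image being generated

mosaic = np.zeros((H, W))
counts = np.zeros((H, W))
chosen = set()
for i in range(0, H - P + 1, P // 2):      # overlapping patch grid
    for j in range(0, W - P + 1, P // 2):
        tgt = target[i:i + P, j:j + P]
        cand = train[:, i:i + P, j:j + P]  # this location in every training image
        best = int(np.argmin(((cand - tgt) ** 2).sum(axis=(1, 2))))
        mosaic[i:i + P, j:j + P] += cand[best]
        counts[i:i + P, j:j + P] += 1
        chosen.add(best)
mosaic /= counts                           # average where patches overlap
print(f"output stitched from patches of {len(chosen)} distinct training images")
```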
Our new paper! "Analytic theory of creativity in convolutional diffusion models," led expertly by @masonkamb.bsky.social
arxiv.org/abs/2412.20292
Our closed-form theory needs no training, is mechanistically interpretable & accurately predicts diffusion model outputs with high median r^2~0.9
December 31, 2024 at 4:54 PM
What it looks like when it grows up?
November 26, 2024 at 10:59 PM