Cengiz Pehlevan
cpehlevan.bsky.social
theory of neural networks for natural and artificial intelligence
https://pehlevan.seas.harvard.edu/
Pinned
We collected lecture notes and blog posts by group members about recent topics in deep learning theory here. Hope it is useful!

pehlevan.seas.harvard.edu/resources-0
Reposted by Cengiz Pehlevan
Congratulations to #KempnerInstitute community members @msalbergo.bsky.social and @mweber.bsky.social — recipients of @schmidtsciences.bsky.social AI2050 Fellowships! 🎉
Discover their innovative research shaping the future of AI 👉 bit.ly/47Do4R3
#AI
Schmidt Sciences Awards Early Career Fellowships to Michael Albergo, Melanie Weber - Kempner Institute
Two Kempner Institute community members have received AI2050 Fellowships from Schmidt Sciences, a nonprofit organization aimed at accelerating scientific knowledge and breakthroughs. The AI2050 Progra...
bit.ly
November 6, 2025 at 8:10 PM
Reposted by Cengiz Pehlevan
First paper from the lab!
We propose a model that separates the estimation of odor concentration and odor presence, and map it onto olfactory bulb circuits
Led by @chenjiang01.bsky.social and @mattyizhenghe.bsky.social joint work with @jzv.bsky.social and with @neurovenki.bsky.social @cpehlevan.bsky.social
November 4, 2025 at 3:40 PM
Reposted by Cengiz Pehlevan
Now in PRX: Theory linking connectivity structure to collective activity in nonlinear RNNs!
For neuro fans: conn. structure can be invisible in single neurons but shape pop. activity
For low-rank RNN fans: a theory of rank=O(N)
For physics fans: fluctuations around DMFT saddle⇒dimension of activity
Connectivity Structure and Dynamics of Nonlinear Recurrent Neural Networks
The structure of brain connectivity predicts collective neural activity, with a small number of connectivity features determining activity dimensionality, linking circuit architecture to network-level...
journals.aps.org
November 3, 2025 at 9:47 PM
Reposted by Cengiz Pehlevan
Applying to do a postdoc or PhD in theoretical ML or neuroscience this year? Consider joining my group (starting next Fall) at UT Austin!
POD Postdoc: oden.utexas.edu/programs-and... CSEM PhD: oden.utexas.edu/academics/pr...
October 23, 2025 at 9:36 PM
Reposted by Cengiz Pehlevan
William Qian, Cengiz Pehlevan: Discovering alternative solutions beyond the simplicity bias in recurrent neural networks https://arxiv.org/abs/2509.21504 https://arxiv.org/pdf/2509.21504 https://arxiv.org/html/2509.21504
September 29, 2025 at 6:50 AM
Reposted by Cengiz Pehlevan
⏳ Less than 1 day left until the Brain & Mind Workshop submission deadline!
🔍 Submit to our Findings or Tutorials track on OpenReview.
Findings track submission: openreview.net/group?id=Neu...
Tutorial track submission: openreview.net/group?id=Neu...
More info: data-brain-mind.github.io
NeurIPS 2025 Workshop DBM Findings
Welcome to the OpenReview homepage for NeurIPS 2025 Workshop DBM Findings
openreview.net
September 7, 2025 at 4:30 PM
Reposted by Cengiz Pehlevan
Since I'm back on BlueSky - with @frostedblakess.bsky.social and @cpehlevan.bsky.social we wrote a brief perspective on how ideas about summary statistics from the statistical physics of learning could potentially help inform neural data analysis... (1/2)
Frontiers | Summary statistics of learning link changing neural representations to behavior
How can we make sense of large-scale recordings of neural activity across learning? Theories of neural network learning with their origins in statistical phy...
www.frontiersin.org
September 4, 2025 at 6:30 PM
Reposted by Cengiz Pehlevan
Excited to share new computational work, led by @jzv.bsky.social, driven by Juan Carlos Fernandez del Castillo + contribution from Farhad Pashakanloo. We recover 3 core motifs in the olfactory system of evolutionarily distant animals using a biophysically-grounded model + efficient coding ideas!
Convergent motifs of early olfactory processing are recapitulated by layer-wise efficient coding
The architecture of early olfactory processing is a striking example of convergent evolution. Typically, a panel of broadly tuned receptors is selectively expressed in sensory neurons (each neuron exp...
www.biorxiv.org
September 4, 2025 at 4:51 PM
Reposted by Cengiz Pehlevan
Great to have this video about my @darpa.mil Artificial Intelligence Quantified (AIQ) program out! Very exciting program with absolutely fantastic teams. Stay tuned for some jaw dropping announcements!

www.youtube.com/watch?v=KVRF...
AIQ: Artificial Intelligence Quantified
YouTube video by DARPAtv
www.youtube.com
September 2, 2025 at 9:50 PM
Reposted by Cengiz Pehlevan
I am extremely grateful to be awarded the National University of Singapore (NUS) Development Grant, and to be a Young NUS Fellow! Look forward to collaborating with the Yong Loo Lin School of Medicine on exciting projects. This is my first grant and hopefully many more to come! #NUS #NeuroAI
August 27, 2025 at 2:31 PM
Reposted by Cengiz Pehlevan
Our new Simons Collaboration on the Physics of Learning and Neural Computation will develop powerful tools from #physics, #math, computer science and theoretical #neuroscience to understand how large neural networks learn, compute, scale, reason and imagine: www.simonsfoundation.org/2025/08/18/s...
Simons Foundation Launches Collaboration on the Physics of Learning and Neural Computation
Simons Foundation Launches Collaboration on the Physics of Learning and Neural Computation on Simons Foundation
www.simonsfoundation.org
August 19, 2025 at 2:43 PM
Reposted by Cengiz Pehlevan
If you work on artificial or natural intelligence and are finishing your PhD, consider applying for a Kempner research fellowship at Harvard:
kempnerinstitute.harvard.edu/kempner-inst...
Kempner Research Fellowship - Kempner Institute
The Kempner brings leading, early-stage postdoctoral scientists to Harvard to work on projects that advance the fundamental understanding of intelligence.
kempnerinstitute.harvard.edu
August 18, 2025 at 5:27 PM
Reposted by Cengiz Pehlevan
Congratulations to #KempnerInstitute associate faculty member @cpehlevan.bsky.social for joining the new
@simonsfoundation.org Simons Collaboration on the Physics of Learning and Neural Computation!

www.simonsfoundation.org/2025/08/18/s...

#AI #neuroscience #NeuroAI #physics #ANNs
Simons Foundation Launches Collaboration on the Physics of Learning and Neural Computation
Simons Foundation Launches Collaboration on the Physics of Learning and Neural Computation on Simons Foundation
www.simonsfoundation.org
August 18, 2025 at 6:57 PM
Reposted by Cengiz Pehlevan
Very excited to lead this new @simonsfoundation.org collaboration on the physics of learning and neural computation to develop powerful tools from physics, math, CS, stats, neuro and more to elucidate the scientific principles underlying AI. See our website for more: www.physicsoflearning.org
August 18, 2025 at 5:48 PM
Reposted by Cengiz Pehlevan
🚨 Excited to announce our #NeurIPS2025 Workshop: Data on the Brain & Mind

📣 Call for: Findings (4- or 8-page) + Tutorials tracks

🎙️ Speakers include @dyamins.bsky.social @lauragwilliams.bsky.social @cpehlevan.bsky.social

🌐 Learn more: data-brain-mind.github.io
August 4, 2025 at 3:28 PM
Reposted by Cengiz Pehlevan
The post is based on a paper written with Yue M. Lu, @jzv.bsky.social, Anindita Maiti and @cpehlevan.bsky.social.

Check it out now at PNAS:

doi.org/10.1073/pnas...

(2/2)
PNAS
Proceedings of the National Academy of Sciences (PNAS), a peer reviewed journal of the National Academy of Sciences (NAS) - an authoritative source of high-impact, original research that broadly spans...
doi.org
July 28, 2025 at 7:26 PM
Reposted by Cengiz Pehlevan
New in the #DeeperLearningBlog: the #KempnerInstitute's Mary Letey presents work recently published in PNAS that offers generalizable insights into in-context learning (ICL) in an analytically-solvable model architecture.

bit.ly/4lPK15p

#AI @pnas.org

(1/2)
Solvable Model of In-Context Learning Using Linear Attention - Kempner Institute
Attention-based architectures are a powerful force in modern AI. In particular, the emergence of in-context learning enables these models to perform tasks far beyond the original next-token prediction...
kempnerinstitute.harvard.edu
July 28, 2025 at 7:24 PM
Reposted by Cengiz Pehlevan
At #ICML2025, presenting work done at @flatironinstitute.org w Matt Smart and @albertobietti.bsky.social on in-context denoising (arxiv.org/abs/2502.05164). Come to Matt’s oral, Thursday, 4:15-4:30 PM, West Ballroom A, and see us right after at poster #E-3207, 4:30-7:00 PM, East Exhibition Hall A-B.
In-context denoising with one-layer transformers: connections between attention and associative memory retrieval
We introduce in-context denoising, a task that refines the connection between attention-based architectures and dense associative memory (DAM) networks, also known as modern Hopfield networks. Using a...
arxiv.org
July 16, 2025 at 6:37 PM
Great to see this one finally out in PNAS! Asymptotic theory of in-context learning by linear attention www.pnas.org/doi/10.1073/... Many thanks to my amazing co-authors Yue Lu, Mary Letey, Jacob Zavatone-Veth @jzv.bsky.social and Anindita Maiti!
Asymptotic theory of in-context learning by linear attention | PNAS
Transformers have a remarkable ability to learn and execute tasks based on examples provided within the input itself, without explicit prior traini...
www.pnas.org
July 11, 2025 at 7:33 AM
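A loose numerical illustration of the flavor of this result: with softmax removed, a single layer of attention over in-context examples reduces to a simple moment of the context, which for a linear regression task behaves like one gradient step from zero initialization. This toy sketch and all its names (w, X, y, x_q) are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Toy sketch of in-context linear regression read out by linear attention.
rng = np.random.default_rng(0)
d, n = 5, 2000                      # input dimension, context length
w = rng.standard_normal(d)          # hidden task vector for this context
X = rng.standard_normal((n, d))     # context inputs
y = X @ w                           # context labels

# Without softmax, attention over (x_i, y_i) pairs collapses to a moment:
# y_hat = x_q @ (X.T @ y) / n, i.e. one gradient-descent step from zero.
x_q = rng.standard_normal(d)
y_hat = x_q @ (X.T @ y) / n

# For isotropic inputs, (X.T @ y) / n -> w as the context grows,
# so the in-context prediction y_hat approaches x_q @ w.
```

For long contexts the prediction error shrinks like 1/sqrt(n), which is the kind of asymptotic behavior a theory in the large-context limit can characterize exactly.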
Reposted by Cengiz Pehlevan
#eNeuro: Obeid and Miller identify distinct neural computations in the primary visual cortex that explain how surrounding context suppresses perception of visual figures and features. @harvardseas.bsky.social vist.ly/3n6tfb2
June 13, 2025 at 11:32 PM
Reposted by Cengiz Pehlevan
📣 Grad students and postdocs in computational and theoretical neuroscience: please consider applying for the 2025 Flatiron Institute Junior Theoretical Neuroscience Workshop! All expenses are covered. Apply by April 14. jtnworkshop2025.flatironinstitute.org
April 9, 2025 at 4:11 PM
Reposted by Cengiz Pehlevan
New preprint! We trained an RNN using RL to solve a decision-making task used to characterize suboptimal decision making in patients with schizophrenia. First project exploring comp psych models, thanks to @adam-manoogian.bsky.social @shawnrhoadsphd.bsky.social @bqian.bsky.social @cpehlevan.bsky.social
Neurocomputational underpinnings of suboptimal beliefs in recurrent neural network-based agents https://www.biorxiv.org/content/10.1101/2025.03.13.642273v1
March 27, 2025 at 5:00 PM
Reposted by Cengiz Pehlevan
Honoured to have been selected as a #SloanFellow
Thankful for all the support from family, mentors, collaborators, colleagues and students along the way!
@sloanfoundation.bsky.social
Professors Paul Masset and David Rolnick awarded 2025 Alfred P. Sloan Research Fellowships ✨ Their work ‘exemplifies McGill’s leadership in AI research and innovation.’ Since 1955, 33 McGill faculty members have received this honour.

Read more: mcgill.ca/x/iZ4
February 18, 2025 at 10:13 PM
Reposted by Cengiz Pehlevan
(1/30) New preprint! "Symmetries and continuous attractors in disordered neural circuits" with Larry Abbott and Haim Sompolinsky
bioRxiv: www.biorxiv.org/content/10.1...
Symmetries and Continuous Attractors in Disordered Neural Circuits
A major challenge in neuroscience is reconciling idealized theoretical models with complex, heterogeneous experimental data. We address this challenge through the lens of continuous-attractor networks...
www.biorxiv.org
January 29, 2025 at 6:26 PM