Luca Ambrogioni
@lucamb.bsky.social
Assistant professor in Machine Learning and Theoretical Neuroscience. Generative modeling and memory. Opinionated, often wrong.
Pinned
I am happy to share our paper: "Spontaneous symmetry breaking in generative diffusion models", published at NeurIPS 2023.

We found that the generative capabilities of diffusion models are the result of a phase transition!

Preprint: arxiv.org/abs/2305.19693

Code: github.com/gabrielraya/...
Reposted by Luca Ambrogioni
The University of Notre Dame is hiring 5 tenured or tenure-track professors in Neuroscience, including Computational Neuroscience, across 4 departments.

Come join me at ND! Feel free to reach out with any questions.

And please share!

apply.interfolio.com/173031
Apply - Interfolio
apply.interfolio.com
September 3, 2025 at 5:26 PM
I am very happy to finally share something I have been working on, on and off, for the past year:

"The Information Dynamics of Generative Diffusion"

This paper connects entropy production, the divergence of vector fields, and spontaneous symmetry breaking.

link: arxiv.org/abs/2508.19897
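For intuition, here is the standard identity behind the entropy-divergence connection (a minimal sketch in my own notation, not an excerpt from the paper):

```latex
% For the probability-flow ODE dx/dt = v_t(x) with marginal density p_t,
% the continuity equation  \partial_t p_t = -\nabla \cdot (p_t v_t)
% implies that the differential entropy H[p_t] = -\int p_t \log p_t \, dx
% evolves as
\frac{d}{dt} H[p_t] = \mathbb{E}_{p_t}\!\left[ \nabla \cdot v_t(x) \right]
% (integrate by parts, assuming vanishing boundary terms): the entropy
% production rate equals the expected divergence of the velocity field.
```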
September 2, 2025 at 4:40 PM
Reposted by Luca Ambrogioni
Students using AI to write their reports is like me going to the gym and getting a robot to lift my weights
June 11, 2025 at 5:09 PM
Generative decisions in diffusion models can be detected locally as symmetry breaking in the energy and globally as peaks in the conditional entropy rate.

Both correspond to a (local or global) suppression of the quadratic potential (the trace of the Hessian).
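A minimal sketch of how one might probe this numerically, assuming a trained score network `score(x, t)` and a stored reverse trajectory (hypothetical names, not the authors' code). The divergence of the score is the trace of the Hessian of log p_t, estimated here with Hutchinson probes:

```python
import torch

def hutchinson_divergence(score, x, t, n_probes=8):
    """Unbiased estimate of div_x score(x, t), i.e. the trace of the
    Hessian of log p_t(x), via Hutchinson probes."""
    total = 0.0
    for _ in range(n_probes):
        xg = x.detach().requires_grad_(True)
        eps = torch.randn_like(xg)
        s = score(xg, t)
        # grad of <s(x), eps> w.r.t. x is J^T eps; dotting with eps again
        # gives eps^T J eps, an unbiased estimate of tr(J).
        jvp = torch.autograd.grad((s * eps).sum(), xg)[0]
        total = total + (jvp * eps).flatten(1).sum(dim=1)
    return total / n_probes

def divergence_profile(score, trajectory, times):
    """Divergence proxy at each stored reverse-process state; peaks in
    its magnitude flag candidate symmetry-breaking (decision) times."""
    return torch.stack([
        hutchinson_divergence(score, x_t, t).mean()
        for x_t, t in zip(trajectory, times)
    ])
```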
May 16, 2025 at 9:12 AM
Reposted by Luca Ambrogioni
🧠✨How do we rebuild our memories? In our new study, we show that hippocampal ripples kickstart a coordinated expansion of cortical activity that helps reconstruct past experiences.

We recorded iEEG from patients during memory retrieval... and found something really cool 👇(thread)
April 29, 2025 at 6:00 AM
In continuous generative diffusion, the conditional entropy rate is the constant term that separates the score matching loss from the denoising score matching loss.

This can be directly interpreted as the information transfer (bit rate) between the state x_t and the final generation x_0.
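For reference, the identity being invoked (the first equality is the standard denoising score matching result; identifying the constant with the conditional entropy rate is the post's claim):

```latex
\mathcal{L}_{\mathrm{DSM}}(\theta, t)
  = \mathbb{E}_{x_0,\, x_t}\,
    \big\| s_\theta(x_t, t) - \nabla_{x_t} \log p_t(x_t \mid x_0) \big\|^2
  = \mathcal{L}_{\mathrm{SM}}(\theta, t) + c_t ,
% where c_t does not depend on \theta; the post identifies this
% \theta-independent gap with the conditional entropy rate.
```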
May 2, 2025 at 1:32 PM
Decisions during generative diffusion are analogous to phase transitions in physics. They can be identified as peaks in the conditional entropy rate curve!
April 30, 2025 at 1:37 PM
Reposted by Luca Ambrogioni
I'd put these on the NeuroAI vision board:

@tyrellturing.bsky.social's Deep learning framework
www.nature.com/articles/s41...

@tonyzador.bsky.social's Next-gen AI through neuroAI
www.nature.com/articles/s41...

@adriendoerig.bsky.social's Neuroconnectionist framework
www.nature.com/articles/s41...
April 28, 2025 at 11:15 PM
Reposted by Luca Ambrogioni
Very excited that our work (together with my PhD student @gbarto.bsky.social and our collaborator Dmitry Vetrov) was recognized with a Best Paper Award at #AABI2025!

#ML #SDE #Diffusion #GenAI 🤖🧠
Congratulations to the #AABI2025 Workshop Track Outstanding Paper Award recipients!
April 30, 2025 at 12:02 AM
I am very happy to share our latest work on the information theory of generative diffusion:

"Entropic Time Schedulers for Generative Diffusion Models"

We find that the conditional entropy offers a natural, data-dependent notion of time during generation.

Link: arxiv.org/abs/2504.13612
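A rough sketch of what such a scheduler could look like, assuming a precomputed conditional-entropy curve on a time grid (hypothetical helper names, not the paper's code):

```python
import numpy as np

def entropic_schedule(t_grid, entropy_vals, n_steps):
    """Place sampling steps uniformly in accumulated conditional entropy
    rather than uniformly in diffusion time.

    Assumes entropy_vals is monotone along t_grid."""
    # Normalize the entropy curve into a monotone "clock" on [0, 1].
    tau = entropy_vals - entropy_vals.min()
    tau = tau / tau.max()
    # Invert the clock: uniform targets in entropic time -> diffusion times.
    targets = np.linspace(0.0, 1.0, n_steps)
    return np.interp(targets, tau, t_grid)
```

Steps then cluster where the entropy rate is highest, i.e. around the generative decisions.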
April 29, 2025 at 1:17 PM
Reposted by Luca Ambrogioni
Flow Matching in a nutshell.
November 27, 2024 at 2:07 PM
Reposted by Luca Ambrogioni
I will be at #NeurIPS2024 in Vancouver. I’m looking for post-docs, and if you want to talk about post-doc opportunities, get in touch. 🤗

Here’s my current team at Aalto University: users.aalto.fi/~asolin/group/
December 8, 2024 at 10:56 AM
Reposted by Luca Ambrogioni
Can language models transcend the limitations of training data?

We train LMs on a formal grammar, then prompt them OUTSIDE of this grammar. We find that LMs often extrapolate logical rules and apply them OOD, too. Proof of a useful inductive bias.

Check it out at NeurIPS:

nips.cc/virtual/2024...
NeurIPS Poster: Rule Extrapolation in Language Modeling: A Study of Compositional Generalization on OOD Prompts - NeurIPS 2024
nips.cc
December 6, 2024 at 1:31 PM
Reposted by Luca Ambrogioni
Excited to speak at the ELLIS ML4Molecules Workshop 2024 in Berlin!

moleculediscovery.github.io/workshop2024/
December 6, 2024 at 8:08 AM
Can we please stop sharing posts that legitimize murder? Please.
December 6, 2024 at 11:14 AM
Reposted by Luca Ambrogioni
Our team at Google DeepMind is hiring Student Researchers for 2025!

🧑‍🔬 Interested in understanding reasoning capabilities of neural networks from first principles?
🧑‍🎓 Currently studying for a BS/MS/PhD?
🧑‍💻 Have solid engineering and research skills?

🌟 We want to hear from you! Details in thread.
December 5, 2024 at 11:08 PM
Reposted by Luca Ambrogioni
Diffusion models create beautiful novel images, but they can also memorize samples from the training set. How does this blending of features allow the creation of novel patterns? Our new work in the Sci4DL workshop #neurips2024 shows that diffusion models behave like Dense Associative Memory networks.
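One way to see the connection (a standard observation, my sketch rather than the paper's statement): on a finite training set, the exact forward marginal is a Gaussian mixture, so the optimal log-density already has the log-sum-exp form of a Dense Associative Memory energy.

```latex
% For a finite training set \{x_i\}_{i=1}^{N}, the exact marginal is
\log p_t(x) = \log \frac{1}{N} \sum_{i=1}^{N}
  \mathcal{N}\!\left(x;\, \alpha_t x_i,\, \sigma_t^2 I\right),
% a log-sum-exp over the training points: the same form as a Dense
% Associative Memory energy with the training points as stored patterns.
```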
December 5, 2024 at 5:29 PM
The naivete of these takes is always amusing.

They could equally be applied to human beings, and they would work just as well.
December 4, 2024 at 2:12 PM
I have been saying all along that diffusion = flow matching.

Is it supposed to be some sort of news now??
December 4, 2024 at 10:36 AM
Reposted by Luca Ambrogioni
I am very excited to share our new NeurIPS 2024 paper + package, Treeffuser! 🌳 We combine gradient-boosted trees with diffusion models for fast, flexible probabilistic predictions and well-calibrated uncertainty.

paper: arxiv.org/abs/2406.07658
repo: github.com/blei-lab/tre...

🧵(1/8)
December 2, 2024 at 9:48 PM
Reposted by Luca Ambrogioni
A common question nowadays: Which is better, diffusion or flow matching? 🤔

Our answer: They’re two sides of the same coin. We wrote a blog post to show how diffusion models and Gaussian flow matching are equivalent. That’s great: It means you can use them interchangeably.
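The algebra behind the "two sides of the same coin" point, for the usual Gaussian path x_t = α_t x_0 + σ_t ε (my sketch of a standard identity, not a quote from the blog post):

```latex
% With p_t(x \mid x_0) = \mathcal{N}(\alpha_t x_0, \sigma_t^2 I),
% the marginal flow-matching velocity is affine in the score:
u_t(x) = \frac{\dot{\alpha}_t}{\alpha_t}\, x
  + \left( \frac{\dot{\alpha}_t}{\alpha_t}\, \sigma_t^2
         - \dot{\sigma}_t \sigma_t \right) \nabla_x \log p_t(x),
% so a learned velocity and a learned score are linear re-parameterizations
% of each other, and either one can drive the sampler.
```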
December 2, 2024 at 6:45 PM
Reposted by Luca Ambrogioni
I'm still cautiously optimistic that we'll find a way to leverage Bayesian ideas in "Modern" AI without retrofitting. However, I'm very much an agnostic when it comes to the philosophy of uncertainty (Bayes vs frequentist vs imprecise, etc.)
November 30, 2024 at 8:04 AM
Reposted by Luca Ambrogioni
🌟 New Research Alert! 🌟
Excited to share our latest work (accepted to NeurIPS 2024) on understanding working memory in multi-task RNN models using naturalistic stimuli, with @takuito.bsky.social and @bashivan.bsky.social.
#tweeprint below:
November 28, 2024 at 4:41 PM