Bao Pham
@baopham.bsky.social
PhD student at RPI. Interested in Hopfield (Associative Memory) networks and energy-based models.
Pinned
Diffusion models create beautiful novel images, but they can also memorize samples from the training set. How does the blending of stored features allow these models to create novel patterns? Our new work at the Sci4DL workshop #neurips2024 shows that diffusion models behave like Dense Associative Memory networks.
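For intuition, here is a minimal NumPy sketch of the softmax retrieval step in a Dense Associative Memory; the inverse temperature beta, the toy patterns, and the function name are illustrative assumptions, not the paper's exact setup. Sharp softmax weights recall a single stored sample (memorization), while softer weights return a blend of memories (novel combinations).

```python
import numpy as np

def dense_am_update(x, memories, beta=4.0):
    """One retrieval step of a Dense Associative Memory:
    re-weight the stored patterns by a softmax over their
    similarity to the query state x. (Illustrative sketch.)"""
    sims = memories @ x                       # similarity to each stored pattern
    w = np.exp(beta * sims - np.max(beta * sims))
    w /= w.sum()                              # softmax weights
    return w @ memories                       # convex blend of the memories

# Toy example: 3 stored patterns, query near the first one.
rng = np.random.default_rng(0)
memories = rng.standard_normal((3, 8))
memories /= np.linalg.norm(memories, axis=1, keepdims=True)
x = memories[0] + 0.1 * rng.standard_normal(8)

print(dense_am_update(x, memories, beta=16.0))  # ~ memories[0]: memorization
print(dense_am_update(x, memories, beta=0.5))   # a blend: a "novel" pattern
```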
Reposted by Bao Pham
I am excited to announce the call for papers for the New Frontiers in Associative Memories workshop at ICLR 2025. New architectures and algorithms, memory-augmented LLMs, energy-based models, Hopfield nets, AM and diffusion, and many other topics.

Website: nfam.vizhub.ai

@iclr-conf.bsky.social
January 14, 2025 at 4:56 PM
Reposted by Bao Pham
Most work on Dense Associative Memory (DenseAM) thus far has focused on the regime where the number of stored memories is below the critical storage capacity. We are beginning to explore the opposite limit, where the amount of data is large.
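For reference, the standard DenseAM energy with polynomial interactions and its known capacity scaling (Krotov & Hopfield, 2016); the prefactor depends on the tolerated retrieval error, so take the scaling qualitatively.

```latex
% Dense Associative Memory over binary states \sigma with K stored
% memories \xi^\mu and a rapidly growing separation function F:
E(\sigma) = -\sum_{\mu=1}^{K} F\big(\xi^{\mu} \cdot \sigma\big),
\qquad F(z) = z^{n}
% Known result: the number of reliably retrievable memories scales as
K^{\max} \sim \alpha_n \, N^{\,n-1},
% so "below capacity" vs. "above capacity" is the dichotomy in question.
```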
December 5, 2024 at 6:19 PM
This work was done in collaboration with Gabriel Raya, Matteo Negri, Mohammed J. Zaki, @lucamb.bsky.social, @krotov.bsky.social.

Lastly, join us at the Sci4DL workshop at #NeurIPS2024 to learn more!

We will be giving an oral presentation there!
December 5, 2024 at 5:29 PM
This work offers a positive perspective on spurious patterns. Usually dismissed as retrieval errors in Associative Memory, such patterns instead signal generalization in deep generative models such as diffusion models.

Here is a link to the paper: openreview.net/pdf?id=zVMMa....
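As a toy check (not from the paper) of what a spurious pattern is: in a classical Hopfield network with Hebbian weights, the sign-mixture of three stored patterns is itself close to a fixed point of the dynamics, i.e., a stable state that was never stored.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 2000, 3                              # neurons, stored patterns
xi = rng.choice([-1, 1], size=(K, N))       # random binary memories
W = (xi.T @ xi) / N                         # Hebbian weight matrix
np.fill_diagonal(W, 0.0)                    # no self-coupling

# Classic spurious attractor: the sign-mixture of the three memories.
mix = np.sign(xi.sum(axis=0))

state = np.sign(W @ mix)                    # one synchronous update
overlap = (state @ mix) / N
print(f"overlap with the mixture after one update: {overlap:.3f}")  # ~1.0
```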
December 5, 2024 at 5:29 PM
In the low-data regime (few stored memories), diffusion models memorize. As the dataset grows, spurious states emerge, signaling the blending of stored features into new combinations, which enables generalization. This is how such models create novel outputs in the high-data regime.
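One way to make the connection concrete (a standard identity, not the paper's derivation): the score of a Gaussian mixture centered on the training points is a softmax-weighted pull toward the memories, i.e., gradient ascent on a DenseAM-style log-sum-exp energy.

```latex
% Idealised diffusion model whose density sits on the training points \xi^\mu:
p(x) \;\propto\; \sum_{\mu=1}^{K} e^{-\|x - \xi^{\mu}\|^{2}/2\sigma^{2}}
% Its score is a softmax-weighted attraction toward the memories:
\nabla_{x} \log p(x) \;=\; \frac{1}{\sigma^{2}}
\left( \sum_{\mu} \operatorname{softmax}_{\mu}\!\left(-\frac{\|x - \xi^{\mu}\|^{2}}{2\sigma^{2}}\right) \xi^{\mu} \;-\; x \right)
% Few, well-separated memories: minima sit on the training points
% (memorization). Many overlapping memories: basins merge into new,
% blended minima (spurious states, generalization).
```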
December 5, 2024 at 5:29 PM