Kirill Neklyudov
@k-neklyudov.bsky.social
Assistant Professor at Mila and UdeM
https://necludov.github.io/
Every image was generated using SuperDiff for SDXL with two different prompts. Now, what are the prompts?🤔
March 6, 2025 at 9:06 PM
4. Diffusion Models as Constrained Samplers for Optimization with Unknown Constraints arxiv.org/abs/2402.18012
January 22, 2025 at 5:58 PM

3. Efficient Evolutionary Search Over Chemical Space with Large Language Models arxiv.org/abs/2406.16976
January 22, 2025 at 5:58 PM

2. Meta Flow Matching: Integrating Vector Fields on the Wasserstein Manifold arxiv.org/abs/2408.14608
January 22, 2025 at 5:58 PM

1. The Superposition of Diffusion Models Using the Itô Density Estimator arxiv.org/abs/2412.17762
January 22, 2025 at 5:58 PM
🧵(7/7) The main result that unlocks all these possibilities is our new Itô density estimator: an efficient way to estimate the density of generated samples for an already-trained diffusion model (assuming we know the score). It requires no extra computation beyond the forward pass!
December 28, 2024 at 2:32 PM
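To make that concrete, here is a minimal NumPy sketch of tracking the log-density alongside reverse-SDE sampling, based on the thread's description rather than the authors' code. It assumes a VP-SDE with drift f(x, t) = -½β(t)x, whose divergence is analytic, and a known score; `beta`, `score_fn`, and `sample_with_logp` are illustrative stand-ins. The density update follows from Itô's lemma combined with the Fokker-Planck equation, with the Laplacian terms cancelling.

```python
import numpy as np

def beta(t):
    return 0.1 + 19.9 * t          # linear VP schedule (an assumption)

def score_fn(x, t):
    return -x                      # toy stand-in: score of a standard Gaussian

def sample_with_logp(x_T, n_steps=1000, T=1.0, rng=None):
    """Simulate the reverse SDE and accumulate log p of the sample.

    The estimator integrates
        d log p = [div f + 0.5 * g^2 * ||s||^2] dtau + g * <s, dW>,
    reusing the same Brownian increment dW as the sampling step, so it
    costs nothing beyond the forward pass of the score network.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    x = x_T.copy()
    d = x.size
    dt = T / n_steps
    log_p = -0.5 * (d * np.log(2 * np.pi) + x @ x)  # log N(0, I) prior at t = T
    for i in range(n_steps):
        t = T - i * dt
        g2 = beta(t)                                # g(t)^2 for the VP-SDE
        s = score_fn(x, t)
        f = -0.5 * beta(t) * x
        div_f = -0.5 * beta(t) * d                  # analytic divergence of f
        dW = np.sqrt(dt) * rng.standard_normal(d)
        # reverse-SDE Euler-Maruyama step (generation direction)
        x = x + (g2 * s - f) * dt + np.sqrt(g2) * dW
        # density update via Ito's lemma (Laplacian terms cancel)
        log_p += (div_f + 0.5 * g2 * (s @ s)) * dt + np.sqrt(g2) * (s @ dW)
    return x, log_p

x0, logp0 = sample_with_logp(np.random.default_rng(1).standard_normal(2))
print(x0, logp0)
```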
🧵(6/7) We try out SuperDiff on generating images with #StableDiffusion by superimposing two prompts so that the image satisfies both. Ever wondered what a waffle cone would look like if it doubled as a volcano? Check out our paper! You’ll find marvellous new creatures in there, such as an otter-duck.
December 28, 2024 at 2:32 PM
🧵(5/7) We test our model on unconditional de novo protein generation, where we superimpose two diffusion models: Proteus generates more designable and novel proteins, while FrameDiff generates more diverse proteins. SuperDiff combines them to generate proteins that are designable, novel, and diverse!
December 28, 2024 at 2:32 PM
🧵(4/7) Here’s a 2D example for intuition: given two already trained models, we combine their outputs (vector fields) based on estimated densities, allowing us to generate samples from all modes (e.g. for continual learning) or from the surface of equal densities (e.g. for concept interpolation).
December 28, 2024 at 2:32 PM
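A minimal sketch of the density-based weighting from the 2D example above: each model's score is weighted by its posterior responsibility for the current sample, computed from the tracked log-density estimates (as in the estimator sketch above). This targets the equal-weight mixture of the two models, i.e. the logical OR; the function and variable names are illustrative, not the authors' API.

```python
import numpy as np

def superposed_score(s1, s2, log_p1, log_p2):
    """Responsibility-weighted score: a softmax over the tracked log
    densities gives each model's posterior probability of having produced
    the current sample, which is exactly the weighting that makes the
    combined field the score of the equal-weight mixture (logical OR)."""
    m = max(log_p1, log_p2)                    # log-sum-exp stabilization
    w1, w2 = np.exp(log_p1 - m), np.exp(log_p2 - m)
    k1 = w1 / (w1 + w2)
    return k1 * s1 + (1.0 - k1) * s2
```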
🧵(2/7) We provide a new approach for estimating the density without touching the divergence. This gives us the control to easily interpolate concepts (logical AND) or mix densities (logical OR), allowing us to create one-of-a-kind generations! ⚡🌀🤗
December 28, 2024 at 2:32 PM
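And a hedged sketch of one way to realize the logical AND: at each reverse-SDE step, solve for the mixing weight kappa that keeps the two models' Itô density estimates equal, steering samples toward the surface of equal densities. This is a plausible reading of the thread, not necessarily the authors' exact scheme; all names are illustrative.

```python
import numpy as np

# Simulating dx = (g^2 * s_bar - f) dtau + g dW with
# s_bar = kappa*s1 + (1-kappa)*s2, each model's log density evolves as
#   d ell_i = [div f + g^2 <s_i, s_bar> - 0.5 g^2 ||s_i||^2] dtau + g <s_i, dW>,
# which is linear in kappa, so we can solve for the kappa that closes the
# current gap ell1 - ell2 at every step (div f cancels in the difference).

def and_kappa(s1, s2, ell1, ell2, g2, dt, dW):
    """Mixing weight that drives the tracked densities toward ell1 == ell2."""
    d12 = s1 - s2
    denom = g2 * (d12 @ d12) * dt
    if denom < 1e-12:                          # scores agree; any weight works
        return 0.5
    num = (-(ell1 - ell2)
           - np.sqrt(g2) * (d12 @ dW)
           - g2 * (d12 @ s2) * dt
           + 0.5 * g2 * ((s1 @ s1) - (s2 @ s2)) * dt)
    # clipping to [0, 1] keeps s_bar a convex combination (a heuristic choice)
    return float(np.clip(num / denom, 0.0, 1.0))
```

The Brownian increment dW is drawn before the step, so kappa can account for it exactly; the sampler then takes the step with s_bar = kappa*s1 + (1-kappa)*s2 and updates both density estimates as in the first sketch.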