Luca Scimeca
@lucascimeca.bsky.social
AI Research @ Mila | Harvard | Cambridge | Edinburgh
We explore how to train conditional generative models to sample molecular conformations from their Boltzmann distribution — using only a reward signal.
📌 GenBio Workshop
Torsional-GFN: A Conditional Conformation Generator for Small Molecules
👥 Authors
Lena Néhale Ezzine*, Alexandra Volokhova*, Piotr Gaiński, Luca Scimeca, Emmanuel Bengio, Prudencio Tossou, Yoshua Bengio, and Alex Hernández-García
(* equal contribution)
July 16, 2025 at 2:03 PM
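To make the setup concrete, here is a minimal, hypothetical sketch of reward-only training of a conformation sampler: a policy over torsion angles is fit with a one-step trajectory-balance-style objective, so that samples land in proportion to the Boltzmann weight exp(-E(x)/kT). The energy function, the Gaussian proposal, and all names below are toy assumptions, not the paper's code.

```python
# Hypothetical sketch, not the paper's code: reward-only training of a
# sampler over torsion angles via a one-step trajectory-balance objective.
# A Gaussian proposal stands in for a proper periodic (von Mises) policy,
# and the energy is a toy stand-in for a force field, in units of kT.
import torch
import torch.nn as nn

N_TORSIONS = 4  # assumed number of rotatable bonds

def energy(angles):
    # Toy torsional energy with a minimum at angle = 0 for every bond.
    return (1.0 - torch.cos(angles)).sum(dim=-1)

class AnglePolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.loc = nn.Parameter(torch.zeros(N_TORSIONS))
        self.log_scale = nn.Parameter(torch.zeros(N_TORSIONS))
        self.log_Z = nn.Parameter(torch.zeros(()))  # learned partition-function estimate

    def sample(self, batch):
        dist = torch.distributions.Normal(self.loc, self.log_scale.exp())
        x = dist.rsample((batch,))
        return x, dist.log_prob(x).sum(-1)

policy = AnglePolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
for step in range(2000):
    x, log_pf = policy.sample(256)
    log_reward = -energy(x)  # log of the Boltzmann weight
    # Trajectory balance: drive log Z + log P_F(x) toward log R(x),
    # so that at convergence samples follow R(x) / Z.
    loss = (policy.log_Z + log_pf - log_reward).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```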
• Works out-of-the-box with large priors like StyleGAN3, NVAE, Stable Diffusion 3, and FoldFlow 2.
• Unifies constrained generation, RL-with-human-feedback, and protein design in a single framework.
• Outperforms both amortized data-space samplers and traditional MCMC across tasks.
July 16, 2025 at 1:59 PM
• We show how to turn any pretrained generator (GAN, VAE, flow) into a conditional sampler by training a diffusion model directly in noise space.
• The diffusion sampler is trained with RL (a minimal sketch follows this post).
• Noise-space posteriors are smoother, giving faster, more stable inference.
July 16, 2025 at 1:59 PM
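As promised above, a minimal sketch of the noise-space idea under toy assumptions (not the paper's implementation): the pretrained generator stays frozen, and a small sampler over its input noise is trained so that generated outputs satisfy a reward while the noise stays close to the prior. A reparameterized Gaussian sampler with a reverse-KL objective stands in here for the RL-trained diffusion sampler; the generator and reward are placeholders.

```python
# Hypothetical sketch of noise-space posterior sampling: keep a pretrained
# generator G frozen and learn a sampler over its input noise z so that
# G(z) scores well under a reward. A reverse-KL objective stands in for
# the paper's RL-trained diffusion sampler; G and the reward are toys.
import torch
import torch.nn as nn

Z_DIM = 8

frozen_G = nn.Sequential(nn.Linear(Z_DIM, 32), nn.Tanh(), nn.Linear(32, 2))
for p in frozen_G.parameters():
    p.requires_grad_(False)

def log_reward(x):
    # Toy conditioning signal: prefer generator outputs near (1, 1).
    return -((x - 1.0) ** 2).sum(-1)

class NoiseSampler(nn.Module):
    def __init__(self):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(Z_DIM))
        self.log_sigma = nn.Parameter(torch.zeros(Z_DIM))

    def forward(self, n):
        dist = torch.distributions.Normal(self.mu, self.log_sigma.exp())
        z = dist.rsample((n,))
        return z, dist.log_prob(z).sum(-1)

prior = torch.distributions.Normal(0.0, 1.0)
q = NoiseSampler()
opt = torch.optim.Adam(q.parameters(), lr=1e-2)
for step in range(1000):
    z, log_q = q(256)
    # Minimize KL(q(z) || p(z) exp(r(G(z))) / Z): the posterior in noise space.
    loss = (log_q - prior.log_prob(z).sum(-1) - log_reward(frozen_G(z))).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```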
👥 Where you’ll find our work:
📌 Main Track
Outsourced Diffusion Sampling: Efficient Posterior Inference in Latent Spaces of Generative Models
👥 Authors
Siddarth Venkatraman, Mohsin Hasan, Minsu Kim, Luca Scimeca, Marcin Sendera, Yoshua Bengio, Glen Berseth, Nikolay Malkin
July 16, 2025 at 1:57 PM
🔹 Outsourced Diffusion Sampling: Efficient Posterior Inference in Latent Spaces of Generative Models.
📝 Authors: Siddarth Venkatraman, Mohsin Hasan, Minsu Kim, Luca Scimeca, …, Yoshua Bengio, Nikolay Malkin
paper: arxiv.org/pdf/2502.06999
📍 To be presented at FPI-ICLR2025 & ICLR 2025 DeLTa Workshops
April 23, 2025 at 1:29 AM
🔹 Solving Bayesian Inverse Problems with Diffusion Priors and Off-Policy RL.
📝 Authors: Luca Scimeca, Siddarth Venkatraman, Moksh Jain, Minsu Kim, Marcin Sendera, Mohsin Hasan, …, Yoshua Bengio, Glen Berseth, Nikolay Malkin
📍 To be presented at ICLR 2025 DeLTa Workshop
April 23, 2025 at 1:28 AM
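For context, a sketch of the target such a method aims at: in a Bayesian inverse problem the sampler must match an unnormalized posterior combining a prior (a pretrained diffusion model in the paper; any log-density here) with a likelihood from the forward model. The linear operator A, noise scale SIGMA, and Gaussian likelihood below are illustrative assumptions, not the paper's setup.

```python
# Hypothetical sketch of the posterior target in a Bayesian inverse problem:
# observe y = A x + Gaussian noise, then combine prior and likelihood.
# The paper trains a diffusion sampler toward such a target with off-policy
# RL; here we only write down the unnormalized log-density being matched.
import torch

A = torch.randn(3, 5)  # assumed linear forward operator
SIGMA = 0.1            # assumed observation-noise scale

def log_posterior_unnorm(x, y, log_prior):
    log_lik = -((y - x @ A.T) ** 2).sum(-1) / (2 * SIGMA ** 2)
    return log_prior(x) + log_lik  # what the trained sampler should follow

# Usage with a standard-normal prior as a stand-in for a diffusion prior:
y = A @ torch.randn(5)
x = torch.randn(10, 5)
print(log_posterior_unnorm(x, y, lambda x: -(x ** 2).sum(-1) / 2).shape)
```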
🔹 Mitigating Shortcut Learning with Diffusion Counterfactuals and Diverse Ensembles.
📝 Authors: Luca Scimeca, Alexander Rubinstein, Damien Teney, Seong Joon Oh, Yoshua Bengio
paper: arxiv.org/pdf/2311.16176
📍 To be presented at SCSL @ ICLR 2025 Workshop
April 23, 2025 at 1:28 AM
🔹 Shaping Inductive Bias in Diffusion Models through Frequency-Based Noise Control.
📝 Authors: Thomas Jiralerspong, Berton Earnshaw, Jason Hartford, Yoshua Bengio, Luca Scimeca
paper: arxiv.org/pdf/2502.10236
📍 To be presented at FPI-ICLR2025 & ICLR 2025 DeLTa Workshops
April 23, 2025 at 1:27 AM
Reposted by Luca Scimeca
Thanks to Alex for his great efforts and work ethic, and to @damienteney.bsky.social and @lucascimeca.bsky.social for their continued help with this paper. We’ll humbly address the criticisms to improve it further for future opportunities.
January 23, 2025 at 10:21 PM
If you're attending, come check out our posters or feel free to reach out to connect during the conference!
Looking forward to insightful conversations and connecting with everyone. See you all at NeurIPS!
#NeurIPS2024 #NIPS24 #MachineLearning #DiffusionModels #Research #AI
December 12, 2024 at 6:28 AM
Amortizing Intractable Inference in Diffusion Models for Bayesian Inverse Problems. Venkatraman, S., Jain, M., Scimeca, L., Kim, M., Sendera, M.,…, Bengio, Y., Malkin, N.
December 12, 2024 at 6:28 AM
On Diffusion Models for Amortized Inference: Benchmarking and Improving Stochastic Control and Sampling. Sendera, M., Kim, M., Mittal, S., Lemos, P., Scimeca, L., Rector-Brooks, J., Adam, A., Bengio, Y., and Malkin, N.
arxiv.org/abs/2402.05098
December 12, 2024 at 6:27 AM
Amortizing Intractable Inference in Diffusion Models for Vision, Language, and Control. Venkatraman, S., Jain, M., Scimeca, L., Kim, M., Sendera, M.,…, Bengio, Y., Malkin, N.
arxiv.org/abs/2405.20971
December 12, 2024 at 6:25 AM
Hi, can I be added to the pack? :)
December 12, 2024 at 6:19 AM