Kirill Neklyudov
@k-neklyudov.bsky.social
Assistant Professor at Mila and UdeM
https://necludov.github.io/
Reposted by Kirill Neklyudov
(1/n) 🚨 Train a model that solves DFT for any geometry with almost no training data
Introducing Self-Refining Training for Amortized DFT: a variational method that predicts ground-state solutions across geometries and generates its own training data!
📜 arxiv.org/abs/2506.01225
💻 github.com/majhas/self-...
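The loop below is a hedged toy sketch of the general shape such a self-refining, amortized setup can take (not the paper's implementation): a network maps geometries to solution parameters, is trained by minimizing a variational energy, and periodically grows its own training set of geometries. The quadratic `energy`, the 3-coordinate geometries, and all sizes are illustrative stand-ins.

```python
import torch, torch.nn as nn

torch.manual_seed(0)
# toy amortization: geometry (3 coordinates here) -> parameters of the predicted solution
model = nn.Sequential(nn.Linear(3, 64), nn.SiLU(), nn.Linear(64, 8))

def energy(params, geom):
    # stand-in quadratic "energy"; a real amortized-DFT setup would plug the
    # predicted orbitals into the DFT energy functional for this geometry
    target = torch.sin(geom.sum(-1, keepdim=True) * torch.arange(1, 9))
    return ((params - target) ** 2).sum(-1).mean()

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
buffer = [torch.randn(3)]                     # a handful of seed geometries
for step in range(2000):
    geom = torch.stack(buffer)[torch.randint(len(buffer), (32,))]
    loss = energy(model(geom), geom)          # variational objective: lower energy is better
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 200 == 0:                       # "self-refining": grow the training set with
        buffer.append(buffer[-1] + 0.1 * torch.randn(3))   # geometries near already-solved ones
```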
June 10, 2025 at 7:49 PM
David is disrupting everyone's NeurIPS grind by putting out amazing work, such a dirty trick!
New paper accepted to ICML! We present a novel policy optimization algorithm for continuous control with a simple closed form that generalizes DDPG, SAC, etc. to generic stochastic policies: Wasserstein Policy Optimization (WPO).
May 2, 2025 at 5:18 PM
Reposted by Kirill Neklyudov
Renormalisation is a central concept in modern physics. It describes how the dynamics of a system change at different scales. A great way to understand and visualise renormalisation is the Ising model.

(some math, but one can follow without it)

1/13
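As a rough illustration of the block-spin picture the thread describes, here is a minimal sketch (lattice size, temperature, and update count are assumed, not taken from the thread): simulate a 2D Ising configuration with Metropolis updates, then coarse-grain 3x3 blocks by majority rule. Repeating this coarse-graining and comparing the fields at successive scales is the real-space renormalisation step.

```python
import numpy as np

rng = np.random.default_rng(0)
L, T, steps = 81, 2.4, 200_000           # lattice size (multiple of 3), temperature, MC updates
spins = rng.choice([-1, 1], size=(L, L))

for _ in range(steps):                   # single-spin-flip Metropolis
    i, j = rng.integers(L, size=2)
    nb = spins[(i+1) % L, j] + spins[(i-1) % L, j] + spins[i, (j+1) % L] + spins[i, (j-1) % L]
    dE = 2 * spins[i, j] * nb            # energy change of flipping spin (i, j), J = 1
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[i, j] *= -1

# block-spin transformation: each 3x3 block becomes one spin via majority rule
blocks = spins.reshape(L // 3, 3, L // 3, 3).sum(axis=(1, 3))
coarse = np.where(blocks >= 0, 1, -1)
print(spins.shape, "->", coarse.shape)   # (81, 81) -> (27, 27)
```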
April 21, 2025 at 7:29 AM
SuperDiff goes super big!
- Spotlight at #ICLR2025!🥳
- Stable Diffusion XL pipeline on HuggingFace (huggingface.co/superdiff/su...), made by Viktor Ohanesian
- New results for molecules in the camera-ready arxiv.org/abs/2412.17762
Let's celebrate with a prompt guessing game in the thread👇
March 6, 2025 at 9:06 PM
We've been sharing these projects throughout the year, and today they were accepted at #ICLR2025 (1-3) and #AISTATS2025 (4)
January 22, 2025 at 5:58 PM
🧵(1/7) Have you ever wanted to combine different pre-trained diffusion models but don't have time or data to retrain a new, bigger model?

🚀 Introducing SuperDiff 🦹‍♀️ – a principled method for efficiently combining multiple pre-trained diffusion models solely during inference!
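A toy sketch of the inference-time idea (not SuperDiff's actual estimator): run one reverse-diffusion sampler while mixing the noise predictions of two pre-trained denoisers. The helpers `eps_a`, `eps_b`, and the fixed weight `w` are illustrative assumptions; the paper instead derives principled per-step weights via an Itô density estimator and supports OR/AND-style combinations.

```python
import torch

def combined_sample(eps_a, eps_b, alphas_cumprod, shape, w=0.5):
    """eps_a / eps_b: callables (x_t, t) -> predicted noise, sharing one noise schedule.
    Plain DDPM-style ancestral sampling with a fixed convex mixture of the two models."""
    x = torch.randn(shape)
    T = len(alphas_cumprod)
    alphas = alphas_cumprod / torch.cat([torch.ones(1), alphas_cumprod[:-1]])  # per-step alpha_t
    for t in reversed(range(T)):
        eps = w * eps_a(x, t) + (1 - w) * eps_b(x, t)        # mixture of the two denoisers
        a_t, abar_t = alphas[t], alphas_cumprod[t]
        x = (x - (1 - a_t) / (1 - abar_t).sqrt() * eps) / a_t.sqrt()
        if t > 0:
            x = x + (1 - a_t).sqrt() * torch.randn_like(x)   # simple choice of posterior noise
    return x
```

In practice the two callables would wrap pre-trained denoisers that were trained with the same noise schedule; everything interesting in SuperDiff is in how `w` is chosen adaptively at every step.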
December 28, 2024 at 2:32 PM
Reposted by Kirill Neklyudov

I am excited to share a perspective on the much-needed topic of #safety for #selfdrivinglaboratories. As the field progresses, understanding the challenges and gaps in building safe setups will be crucial for scaling up this technology!

doi.org/10.26434/che...
Steering towards safe self-driving laboratories
December 23, 2024 at 5:39 PM
Reposted by Kirill Neklyudov
With some delay, JetFormer's *prequel* paper is finally out on arXiv: a radically simple ViT-based normalizing flow (NF) model that achieves SOTA results in its class.

Jet is one of the key components of JetFormer, deserving a standalone report. Let's unpack: 🧵⬇️
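For intuition about what a coupling-based flow on image patches can look like, here is a hedged sketch (illustrative only, not the Jet architecture or its hyperparameters): one affine coupling layer in which a small transformer encoder predicts a scale and shift for half of the patch channels from the other half, returning the transformed tensor together with its log-determinant.

```python
import torch, torch.nn as nn

class PatchCoupling(nn.Module):
    """Toy affine coupling layer over patch tokens; all sizes are illustrative."""
    def __init__(self, dim=64, depth=2, heads=4):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(dim // 2, heads, dim * 2, batch_first=True)
        self.net = nn.Sequential(
            nn.TransformerEncoder(enc_layer, depth),
            nn.Linear(dim // 2, dim),            # -> (log-scale, shift) for the other half
        )

    def forward(self, x):                        # x: (batch, num_patches, dim)
        x1, x2 = x.chunk(2, dim=-1)
        log_s, t = self.net(x1).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)                # keep scales well-conditioned
        y2 = x2 * log_s.exp() + t
        logdet = log_s.sum(dim=(1, 2))           # per-sample log|det Jacobian|
        return torch.cat([x1, y2], dim=-1), logdet

x = torch.randn(8, 196, 64)                      # e.g. 14x14 patches, 64 dims each
y, logdet = PatchCoupling()(x)
```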
December 20, 2024 at 2:39 PM
Come join us in Singapore at #ICLR2025 to discuss the latest developments wherever Learning meets Sampling!
🔊 Super excited to announce the first ever Frontiers of Probabilistic Inference: Learning meets Sampling workshop at #ICLR2025 @iclr-conf.bsky.social!

🔗 website: sites.google.com/view/fpiwork...

🔥 Call for papers: sites.google.com/view/fpiwork...

more details in thread below👇 🧵
December 18, 2024 at 7:10 PM
We're presenting our spotlight paper on transition path sampling at #NeurIPS2024 this week! Learn how to speed up conventional Monte Carlo approaches by orders of magnitude

Wed 11 Dec 4:30 pm #2606
arxiv.org/abs/2410.07974

first authors = {Yuanqi Du, Michael Plainer, @brekelmaniac.bsky.social}
December 10, 2024 at 2:05 AM
so happy to see that Action Matching finds applications in physics, outperforming diffusion models and Flow Matching!

wonderful work by Jules Berman, Tobias Blickhan, and Benjamin Peherstorfer!
arxiv.org/abs/2410.12000
November 27, 2024 at 8:41 PM