Introducing Self-Refining Training for Amortized DFT: a variational method that predicts ground-state solutions across geometries and generates its own training data!
📜 arxiv.org/abs/2506.01225
💻 github.com/majhas/self-...
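For a flavor of what "generates its own training data" can look like, here is a minimal sketch of a self-refining loop, assuming a PyTorch model that maps geometries to orbital coefficients and a variational energy used directly as the training loss. Every name here (`AmortizedDFTNet`, `variational_energy`, the Gaussian geometry perturbation) is an illustrative placeholder, not the paper's actual implementation.

```python
import torch

class AmortizedDFTNet(torch.nn.Module):
    """Hypothetical amortized model: geometry -> orbital coefficients."""
    def __init__(self, n_atoms: int, n_orbitals: int, hidden: int = 128):
        super().__init__()
        self.n_orbitals = n_orbitals
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_atoms * 3, hidden),
            torch.nn.SiLU(),
            torch.nn.Linear(hidden, n_orbitals * n_orbitals),
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (batch, n_atoms, 3) -> coefficients: (batch, n_orb, n_orb)
        return self.net(coords.flatten(1)).view(-1, self.n_orbitals, self.n_orbitals)

def variational_energy(coeffs: torch.Tensor) -> torch.Tensor:
    # Placeholder functional: the real objective evaluates the DFT energy
    # of the electron density built from the predicted coefficients.
    return (coeffs ** 2).sum(dim=(1, 2)).mean()

def self_refining_step(model, optimizer, geometries):
    # 1) Amortized prediction across a batch of geometries.
    coeffs = model(geometries)
    # 2) Variational training: minimizing the energy needs no DFT labels.
    loss = variational_energy(coeffs)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # 3) Self-generated data: sample new geometries near the current pool
    #    (a stand-in for the sampling scheme used in the actual method).
    new_geometries = geometries + 0.01 * torch.randn_like(geometries)
    return loss.item(), torch.cat([geometries, new_geometries])

model = AmortizedDFTNet(n_atoms=3, n_orbitals=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
pool = torch.randn(16, 3, 3)  # toy 3-atom geometries
for _ in range(5):
    loss, pool = self_refining_step(model, opt, pool)
```

The key property this sketch tries to capture: because the loss is a variational bound on the ground-state energy, the model can improve and grow its geometry pool without any precomputed DFT solutions.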
How do we build neural decoders that are:
⚡️ fast enough for real-time use
🎯 accurate across diverse tasks
🌍 generalizable to new sessions, subjects, and even species?
We present POSSM, a hybrid SSM architecture that optimizes for all three of these axes!
🧵1/7
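To make the real-time axis concrete, here is a minimal sketch of a recurrent state-space decoder that consumes one spike bin at a time at constant cost per step. The diagonal SSM, layer sizes, and linear spike embedding are generic placeholders, not POSSM's actual hybrid architecture; see the paper for the real design.

```python
import torch
import torch.nn as nn

class DiagonalSSM(nn.Module):
    """Minimal diagonal state-space layer: h_t = a * h_{t-1} + B x_t, y_t = C h_t."""
    def __init__(self, d_in: int, d_state: int, d_out: int):
        super().__init__()
        self.log_a = nn.Parameter(torch.randn(d_state) - 2.0)
        self.B = nn.Linear(d_in, d_state, bias=False)
        self.C = nn.Linear(d_state, d_out, bias=False)

    def step(self, x_t, h):
        a = torch.sigmoid(self.log_a)  # keep each mode's decay in (0, 1)
        h = a * h + self.B(x_t)
        return self.C(h), h

class SpikeDecoder(nn.Module):
    """Binned spike counts -> behavior (e.g. 2-D cursor velocity)."""
    def __init__(self, n_units: int, d_model: int = 64, d_out: int = 2):
        super().__init__()
        self.embed = nn.Linear(n_units, d_model)
        self.ssm = DiagonalSSM(d_model, d_model, d_out)
        self.d_model = d_model

    def forward(self, spikes):
        # spikes: (T, n_units); the recurrence costs O(1) per bin,
        # which is what makes streaming, real-time decoding feasible.
        h = spikes.new_zeros(self.d_model)
        outs = []
        for x_t in spikes:
            y, h = self.ssm.step(self.embed(x_t), h)
            outs.append(y)
        return torch.stack(outs)

decoder = SpikeDecoder(n_units=96)
velocity = decoder(torch.poisson(torch.full((200, 96), 2.0)))  # fake spike bins
```

One common recipe for the generalization axis (again an assumption here, not a claim about POSSM's internals) is to swap the per-session input embedding while sharing the recurrent backbone across sessions, subjects, and species.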
🚀 Introducing SuperDiff 🦹‍♀️ – a principled method for efficiently combining multiple pre-trained diffusion models solely during inference!
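As a rough illustration of inference-time combination, here is a sketch of a DDPM-style reverse loop that mixes the noise predictions of two pre-trained denoisers sharing a noise schedule. The fixed mixing weight `lam` is a stand-in: SuperDiff's actual OR/AND superposition rules reweight the models on the fly using density estimates along the sampling trajectory.

```python
import torch

@torch.no_grad()
def combined_reverse_sample(eps_a, eps_b, betas, shape, lam=0.5):
    """Reverse diffusion with two denoisers, combined only at inference."""
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)
    for t in reversed(range(len(betas))):
        t_batch = torch.full((shape[0],), t, dtype=torch.long)
        # Mix the two models' noise predictions at every step.
        eps = lam * eps_a(x, t_batch) + (1 - lam) * eps_b(x, t_batch)
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) \
               / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

# Usage with toy "denoisers" (any callables with this signature work):
betas = torch.linspace(1e-4, 0.02, 100)
eps_a = lambda x, t: x * 0.1
eps_b = lambda x, t: -x * 0.1
samples = combined_reverse_sample(eps_a, eps_b, betas, shape=(4, 8))
```

Note that neither model is retrained or fine-tuned: everything happens in the sampling loop, which is the point of the method.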
🔗 website: sites.google.com/view/fpiwork...
🔥 Call for papers: sites.google.com/view/fpiwork...
more details in thread below👇 🧵
come see us at @neuripsconf.bsky.social !!
🔖 Paper: arxiv.org/abs/2410.22388
🔗 Github: github.com/shenoynikhil...