Vincent Fortuin
@vincefort.bsky.social
PI at Helmholtz AI, Faculty at TU Munich, Fellow at Zuse School for Reliable AI, Branco Weiss Fellow, ELLIS Scholar.
Prev: Cambridge CBL, St John's College, ETH Zürich, Google Brain, Microsoft Research, Disney Research.
https://fortuin.github.io/
I’m greatly enjoying the setting of this year’s #AABI symposium, thanks to NTU Singapore for hosting us!
April 29, 2025 at 6:28 AM
Merry Bayesmas to those who are celebrating 🎅🏻
December 25, 2024 at 1:56 PM
This was a really awesome collaboration with Tristan Cinquin, Marvin Pförtner, @philipphennig.bsky.social, and Robert Bamler!
December 12, 2024 at 11:47 AM
🌊 We show that this can improve performance over standard Laplace with weight-space priors in real-world scientific tasks, such as this ocean current modeling problem
December 12, 2024 at 11:47 AM
💡 In our work, we propose to use the Laplace approximation in function space! This is mathematically principled (after a bit of measure theory) and can be efficiently implemented using matrix-free linear algebra 🚀
December 12, 2024 at 11:47 AM
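For intuition about the "matrix-free linear algebra" mentioned in the post above, here is a minimal, hedged JAX sketch: linear systems involving the Hessian are solved using only Hessian-vector products, without ever materializing the matrix. The toy loss, the pretend MAP estimate, and the right-hand side are placeholders for illustration; this is a generic sketch of the matrix-free idea, not the implementation from the paper.

```python
import jax
import jax.numpy as jnp
from jax.scipy.sparse.linalg import cg

def loss(theta):
    # Toy stand-in for a negative log posterior (hypothetical, illustration only).
    return jnp.sum((theta - 1.0) ** 2) + 0.1 * jnp.sum(theta ** 2)

def hvp(theta, v):
    # Hessian-vector product via forward-over-reverse autodiff;
    # the Hessian itself is never built.
    return jax.jvp(jax.grad(loss), (theta,), (v,))[1]

theta_map = jnp.zeros(5)           # pretend MAP estimate
rhs = jax.grad(loss)(theta_map)    # some vector we want to apply H^{-1} to

# Matrix-free solve of H x = rhs with conjugate gradients:
# cg only needs a function that applies the matrix to a vector.
x, _ = cg(lambda v: hvp(theta_map, v), rhs)
```

The same pattern (autodiff matvecs plus iterative solvers) is what makes curvature-based approximations tractable for large networks, since the full Hessian would be far too big to store.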
Very honored to have been elected an @ellis.eu Scholar in the program for Robust Machine Learning: ellis.eu/programs/rob...

What better way to celebrate it than to speak at the ELLIS UnConference in Bad Teinach?
December 10, 2024 at 10:04 PM
This was a super fun collaboration, led by my master's student Rayen Dhahri with Alex Immer, @bertrand-sharp.bsky.social, and Stephan Günnemann
December 6, 2024 at 3:05 PM
🤔 How efficient can sparsification be without sacrificing performance?

☝️ We showcase significant computational savings while retaining high performance across different sparsity levels:
📈 Achieves up to 20x computational savings with minimal accuracy degradation
December 6, 2024 at 2:59 PM
Moreover, we introduce Optimal Posterior Damage (OPD), a robust pruning criterion that:

⚡ Reuses posterior precision to guide efficient sparsification
⚡ Covers structured & unstructured pruning seamlessly
⚡ Scales well across various architectures and sparsity levels
December 6, 2024 at 2:59 PM
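To make the "reuses posterior precision" point above concrete, here is a hedged toy sketch of a precision-weighted pruning score in the spirit of OPD (closely related to Optimal Brain Damage, but with the Laplace posterior precision in place of a fresh curvature estimate). The diagonal-posterior assumption, the function names, and all numbers are placeholders, not the paper's exact criterion or its structured variant.

```python
import jax.numpy as jnp

def opd_like_scores(theta, posterior_precision_diag):
    # Saliency under a diagonal Laplace posterior (assumption for this sketch):
    # approximate drop in log posterior when weight i is set to zero,
    # s_i = 0.5 * lambda_i * theta_i^2.
    return 0.5 * posterior_precision_diag * theta ** 2

def prune_mask(scores, sparsity):
    # Keep the (1 - sparsity) fraction of weights with the largest scores.
    k = int(round((1.0 - sparsity) * scores.size))
    if k == 0:
        return jnp.zeros_like(scores, dtype=bool)
    threshold = jnp.sort(scores)[-k]
    return scores >= threshold

theta = jnp.array([0.8, -0.05, 1.2, 0.01, -0.4])   # toy weights
prec = jnp.array([2.0, 50.0, 0.5, 10.0, 5.0])       # toy posterior precisions
mask = prune_mask(opd_like_scores(theta, prec), sparsity=0.4)
theta_sparse = theta * mask                          # unstructured pruning of 40% of the weights
```

Because the posterior precision is already available from the Laplace approximation used for the marginal likelihood, scoring weights this way adds little extra cost on top of training.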
📢 Excited to present our work at #NeurIPS2024!
📄 "Shaving Weights with Occam's Razor: Bayesian Sparsification for Neural Networks using the Marginal Likelihood"
Read it here: arxiv.org/abs/2402.15978

Details in 🧵
December 6, 2024 at 2:59 PM