Daniel Cremers
@dcremers.bsky.social
Professor of Computer Vision and AI at TU Munich, Director of the Munich Center for Machine Learning mcml.ai and of ELLIS Munich ellismunich.ai
cvg.cit.tum.de
Reposted by Daniel Cremers
🤗 I’m excited to share our recent work: TwoSquared: 4D Reconstruction from 2D Image Pairs.
🔥 Our method produces geometry-consistent, texture-consistent, and physically plausible 4D reconstructions
📰 Check our project page sangluisme.github.io/TwoSquared/
❤️ @ricmarin.bsky.social @dcremers.bsky.social
April 23, 2025 at 4:48 PM
Reposted by Daniel Cremers
The ICLR 2025 MLMP Best Paper Award, along with 2k GPU-hours from Nebius, goes to "On Incorporating Scale into Graph Networks"! Congratulations, Christian Koke, Yuesong Shen,
@abhi-rf.bsky.social, Marvin Eisenberger, @pseudomanifold.topology.rocks, Michael M. Bronstein, @dcremers.bsky.social!
May 5, 2025 at 2:58 PM
Reposted by Daniel Cremers
Can we match vision and language representations without any supervision or paired data?
Surprisingly, yes!
Our #CVPR2025 paper with @neekans.bsky.social and @dcremers.bsky.social shows that the pairwise distances in both modalities are often enough to find correspondences.
⬇️ 1/4
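The core idea, that pairwise-distance matrices alone can reveal cross-modal correspondences, can be illustrated with a toy sketch (an illustration of the general principle only, not the paper's method or code): for a small set, brute-force the permutation whose pairwise-distance matrix in one "modality" best matches the other's.

```python
import itertools
import numpy as np

def match_by_pairwise_distances(X, Y):
    """Brute-force the row permutation of Y whose pairwise-distance
    matrix best matches that of X (feasible only for small N)."""
    def dist_matrix(Z):
        return np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)

    D_X, D_Y = dist_matrix(X), dist_matrix(Y)
    best_perm, best_cost = None, np.inf
    for perm in itertools.permutations(range(len(X))):
        p = list(perm)
        cost = np.linalg.norm(D_X - D_Y[np.ix_(p, p)])
        if cost < best_cost:
            best_cost, best_perm = cost, p
    return best_perm

# Toy "second modality": a random rotation of X (distances preserved)
# with shuffled rows; the distances alone recover the shuffle.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal map
true_perm = [3, 0, 4, 1, 2]
Y = (X @ Q)[true_perm]
recovered = match_by_pairwise_distances(X, Y)  # inverse of true_perm
```

Real cross-modal embeddings only approximately preserve relative distances, so the paper's setting is far harder than this toy case; the sketch just shows why no paired supervision is needed once relative geometry agrees.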
June 3, 2025 at 9:27 AM
Reposted by Daniel Cremers
💡Explore the insights from the ELLIS ML & Computer Vision Workshop (Apr 1-4, 2025) in Bad Teinach 🌲
Leading researchers in the field gathered to explore Vision-Language Models, 3D reconstruction, and links to neuroscience, advancing the future of vision & ML.
👉 Read more: ellis.eu/news/insight...
May 2, 2025 at 9:59 AM
Reposted by Daniel Cremers
Very glad to announce that our "Finsler Multi-Dimensional Scaling" paper, accepted at #CVPR2025, is now on arXiv! arxiv.org/abs/2503.18010
We are thrilled to have 12 papers accepted to #CVPR2025. Thanks to all our students and collaborators for this great achievement!
For more details check out cvg.cit.tum.de
March 25, 2025 at 7:30 AM
Reposted by Daniel Cremers
Check out our recent #CVPR2025 paper AnyCam, a fast method for pose estimation in casual videos!
1️⃣ Can be directly trained on casual videos without the need for 3D annotation.
2️⃣ Based around a feed-forward transformer and light-weight refinement.
Code and more info: ⏩ fwmb.github.io/anycam/
April 23, 2025 at 3:52 PM
Reposted by Daniel Cremers
AnyCam: Learning to Recover Camera Poses and Intrinsics from Casual Videos
@fwimbauer.bsky.social, Weirong Chen, Dominik Muhle, Christian Rupprecht, @dcremers.bsky.social
tl;dr: uncertainty-based loss + pre-trained depth and flow networks + test-time trajectory refinement
arxiv.org/abs/2503.23282
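For readers unfamiliar with the first ingredient: an uncertainty-based loss typically lets the network predict a per-pixel confidence that down-weights unreliable residuals (e.g. on moving objects). The sketch below shows the generic heteroscedastic form only; it is a common pattern, not necessarily the authors' exact formulation, and all names are illustrative.

```python
import numpy as np

def uncertainty_weighted_loss(residual, log_sigma):
    """Generic heteroscedastic loss: exp(-log_sigma) down-weights the
    residual where predicted uncertainty is high, while the +log_sigma
    term penalises declaring everything uncertain."""
    return float(np.mean(np.abs(residual) * np.exp(-log_sigma) + log_sigma))

# A pixel with a large residual (e.g. a moving object violating a
# static-scene assumption) is cheaper to explain as "uncertain" ...
big_err = np.full((4, 4), 10.0)
assert uncertainty_weighted_loss(big_err, np.full((4, 4), 2.0)) < \
       uncertainty_weighted_loss(big_err, np.zeros((4, 4)))
# ... while a small residual is cheaper to keep confident.
small_err = np.full((4, 4), 0.1)
assert uncertainty_weighted_loss(small_err, np.zeros((4, 4))) < \
       uncertainty_weighted_loss(small_err, np.full((4, 4), 2.0))
```

The balance between the two terms is what lets such losses act as a soft outlier filter during training, without any explicit dynamic-object labels.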
April 2, 2025 at 11:50 AM
Reposted by Daniel Cremers
Work by: @olvrhhn.bsky.social *, @christophreich.bsky.social *, @neekans.bsky.social, @dcremers.bsky.social, Christian Rupprecht, and @stefanroth.bsky.social
Paper: arxiv.org/abs/2504.01955
Project Page: visinf.github.io/cups
Code: github.com/visinf/cups
MCML Blogpost: mcml.ai/news/2025-04...
April 4, 2025 at 1:38 PM
Reposted by Daniel Cremers
Back on Track: Bundle Adjustment for Dynamic Scene Reconstruction
Weirong Chen, @ganlinzhang.xyz, @fwimbauer.bsky.social, Rui Wang, @neekans.bsky.social, Andrea Vedaldi, @dcremers.bsky.social
tl;dr: learning-based 3D point tracker decouples camera and object-based motion
arxiv.org/abs/2504.14516
April 23, 2025 at 5:13 PM
Reposted by Daniel Cremers
PRaDA: Projective Radial Distortion Averaging
Daniil Sinitsyn, @linushn.bsky.social, @dcremers.bsky.social
tl;dr: operate in projective space -> geometry is unique up to a homography
arxiv.org/abs/2504.16499
April 24, 2025 at 1:03 PM
We are thrilled to have 12 papers accepted to #CVPR2025. Thanks to all our students and collaborators for this great achievement!
For more details check out cvg.cit.tum.de
March 13, 2025 at 1:11 PM
Reposted by Daniel Cremers
🚀 Excited to kick off our @iclr-conf.bsky.social 2025 Machine Learning Multiscale Processes Workshop contributed paper series! 🥳
📝 On the Successful Incorporation of Scale into Graph Neural Networks
📅 Join us on April 27 at #ICLR2025!
⏳ Early reg. deadline: March 15
#AI #ML #ICLR
March 8, 2025 at 6:46 AM
Reposted by Daniel Cremers
🥳 Thrilled to announce that our work, "4Deform: Neural Surface Deformation for Robust Shape Interpolation," has been accepted to #CVPR2025 🙌
💻 Check our project page: 4deform.github.io
👏 Great thanks to my amazing co-authors. @ricmarin.bsky.social @dongliangcao.bsky.social @dcremers.bsky.social
March 3, 2025 at 6:18 PM
Exciting discussions on the future of AI at the Paris AI Action Summit with French Minister of Science Philippe Baptiste and many leading AI researchers.
February 7, 2025 at 5:21 PM
Reposted by Daniel Cremers
Including commentary from @claireve.bsky.social, @dcremers.bsky.social, Yair Weiss, @gulvarol.bsky.social, Bernhard Schölkopf, and @lawrennd.bsky.social
January 27, 2025 at 9:51 AM
Reposted by Daniel Cremers
🥳Thrilled to share our work, "Implicit Neural Surface Deformation with Explicit Velocity Fields", accepted at #ICLR2025 👏
Code is available at: github.com/Sangluisme/I...
😊Huge thanks to my amazing co-authors. @dongliangcao.bsky.social @dcremers.bsky.social
👏Special thanks to @ricmarin.bsky.social
January 23, 2025 at 5:22 PM
Indeed - everyone had a blast - thank you all for the great talks, discussions, and skiing/snowboarding!
This week we had our winter retreat jointly with Daniel Cremers' group in Montafon, Austria. 46 talks, 100 km of slopes, and night sledding with some occasionally lost and found. It has been fun!
January 16, 2025 at 5:56 PM