https://shenhanqian.github.io/
@ganlinzhang.xyz, @shenhanqian.bsky.social, @xiwang1212.bsky.social, @dcremers.bsky.social
arxiv.org/abs/2509.01584
Surprisingly, yes!
Our #CVPR2025 paper with @neekans.bsky.social and @dcremers.bsky.social shows that the pairwise distances in both modalities are often enough to find correspondences.
⬇️ 1/4
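The core idea can be sketched in a few lines: if two representations of the same underlying data preserve relative geometry, matching their intra-modal pairwise distance matrices recovers the correspondence. A minimal toy illustration of this idea (not the paper's actual pipeline), using scipy's approximate quadratic-assignment solver:

```python
import numpy as np
from scipy.optimize import quadratic_assignment
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
n = 30
X = rng.normal(size=(n, 3))                   # "modality" A: a point set
R = np.linalg.qr(rng.normal(size=(3, 3)))[0]  # random orthogonal transform
perm = rng.permutation(n)
Y = (X @ R)[perm]                             # "modality" B: transformed and shuffled

# Pairwise distances are invariant to the orthogonal transform, so the
# correspondence can be recovered from the two distance matrices alone.
D_x = squareform(pdist(X))
D_y = squareform(pdist(Y))

# Approximate QAP: find the permutation that best aligns D_x with D_y.
res = quadratic_assignment(D_x, D_y, method="faq", options={"maximize": True})
match = res.col_ind                           # X[i] is matched to Y[match[i]]
print("matching accuracy:", np.mean(perm[match] == np.arange(n)))
```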
Turns out you can!
In our #CVPR2025 paper AnyCam, we directly train on YouTube videos and achieve SOTA results by using an uncertainty-based flow loss and monocular priors!
⬇️
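An "uncertainty-based flow loss" typically follows the heteroscedastic pattern: predict a per-pixel uncertainty alongside the flow, so pixels that violate the static-scene assumption (e.g., independently moving objects) are down-weighted automatically. A sketch of that generic formulation, not necessarily AnyCam's exact loss:

```python
import torch

def uncertainty_flow_loss(flow_pred, flow_obs, log_sigma):
    """Uncertainty-weighted L1 flow loss (generic heteroscedastic form).

    flow_pred, flow_obs : (B, 2, H, W) predicted and observed optical flow
    log_sigma           : (B, 1, H, W) per-pixel log-uncertainty

    High-uncertainty pixels contribute less to the residual term; the
    log-sigma penalty keeps the network from inflating uncertainty everywhere.
    """
    residual = (flow_pred - flow_obs).abs().sum(dim=1)  # (B, H, W)
    return (residual / log_sigma.exp().squeeze(1) + log_sigma.squeeze(1)).mean()

# Toy usage with random tensors:
loss = uncertainty_flow_loss(torch.randn(1, 2, 8, 8),
                             torch.randn(1, 2, 8, 8),
                             torch.zeros(1, 1, 8, 8))
```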
Weirong Chen, @ganlinzhang.xyz, @fwimbauer.bsky.social, Rui Wang, @neekans.bsky.social, Andrea Vedaldi, @dcremers.bsky.social
tl;dr: a learning-based 3D point tracker that decouples camera motion from object motion
arxiv.org/abs/2504.14516
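A minimal sketch of what decoupling camera and object motion means for a 3D point track (function name and pose convention are illustrative, not the paper's code): the motion explained by camera ego-motion is subtracted from the observed point motion, leaving the residual object motion.

```python
import torch

def decouple_point_motion(P_t, P_t1, R_rel, t_rel):
    """Split per-point 3D motion into camera-induced and object parts.

    P_t, P_t1    : (N, 3) tracked 3D points in camera coordinates at frames t, t+1
    R_rel, t_rel : relative camera pose mapping frame-t coordinates to
                   frame-(t+1) coordinates (assumed estimated elsewhere)
    """
    P_static = P_t @ R_rel.T + t_rel   # where each point would land if static
    camera_motion = P_static - P_t     # motion explained by ego-motion alone
    object_motion = P_t1 - P_static    # residual motion of dynamic objects
    return camera_motion, object_motion
```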
For more details, check out cvg.cit.tum.de
Code is available at: github.com/Sangluisme/I...
😊 Huge thanks to my amazing co-authors @dongliangcao.bsky.social and @dcremers.bsky.social!
👏 Special thanks to @ricmarin.bsky.social
Amazing talks, activities, and people! Love it!