More: olvrhhn.github.io
We present CUPS, the first unsupervised panoptic segmentation method trained directly on scene-centric imagery.
Using self-supervised features, depth & motion, we achieve SotA results!
🌎 visinf.github.io/cups
www.career.tu-darmstadt.de/tu-darmstadt...
Come by our SceneDINO poster at NeuSLAM today 14:15 (Kamehameha II) or Tue, 15:15 (Ex. Hall I 627)!
W/ Jevtić, @fwimbauer.bsky.social, @olvrhhn.bsky.social, Rupprecht, @stefanroth.bsky.social, @dcremers.bsky.social
Work by Jannik Endres, @olvrhhn.bsky.social, Charles Corbière, @simoneschaub.bsky.social, @stefanroth.bsky.social, and Alexandre Alahi.
@ellis.eu @tuda.bsky.social
ellis.eu/news/ellis-p...
Aleksandar Jevtić, Christoph Reich, Felix Wimbauer ... Daniel Cremers
arxiv.org/abs/2507.06230
Trending on www.scholar-inbox.com
🌍: visinf.github.io/scenedino/
📃: arxiv.org/abs/2507.06230
🤗: huggingface.co/spaces/jev-a...
@jev-aleks.bsky.social @fwimbauer.bsky.social @olvrhhn.bsky.social @stefanroth.bsky.social @dcremers.bsky.social
🗓️ Submissions open until June 26 AoE.
📄 Got accepted to ICCV? Congrats! Consider our non-proceedings track.
#ICCV2025 @iccv.bsky.social
by @olvrhhn.bsky.social, @christophreich.bsky.social, @neekans.bsky.social, @dcremers.bsky.social, Christian Rupprecht, and @stefanroth.bsky.social
Sunday, 8:30 AM, ExHall D, Poster 330
Project Page: visinf.github.io/cups
Turns out you can!
In our #CVPR2025 paper AnyCam, we directly train on YouTube videos and achieve SOTA results by using an uncertainty-based flow loss and monocular priors!
⬇️
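For intuition, here is a minimal PyTorch sketch of what an uncertainty-weighted flow loss can look like. The Laplace negative log-likelihood weighting, tensor shapes, and function names are illustrative assumptions, not AnyCam's actual formulation:

```python
import torch

def uncertainty_flow_loss(flow_pred, flow_obs, log_sigma):
    """Illustrative uncertainty-weighted flow loss (not AnyCam's exact loss).

    Per-pixel flow residuals are down-weighted where the network predicts
    high uncertainty (log_sigma); the additive log_sigma term penalizes
    declaring everything uncertain (Laplace negative log-likelihood, as in
    common aleatoric-uncertainty losses).
    """
    residual = (flow_pred - flow_obs).abs().sum(dim=1, keepdim=True)  # L1 flow error per pixel
    sigma = log_sigma.exp()
    nll = residual / sigma + log_sigma  # Laplace NLL per pixel
    return nll.mean()

# Toy usage: (B, 2, H, W) flow fields, (B, 1, H, W) predicted log-uncertainty.
flow_pred = torch.randn(1, 2, 64, 64)
flow_obs = torch.randn(1, 2, 64, 64)
log_sigma = torch.zeros(1, 1, 64, 64, requires_grad=True)
loss = uncertainty_flow_loss(flow_pred, flow_obs, log_sigma)
loss.backward()
```

The point of the log-penalty is that the model only pays off by flagging genuinely unreliable flow, e.g. on dynamic objects, rather than inflating uncertainty everywhere.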
cvpr.thecvf.com/Conferences/...
We release new descriptions for 1.9M(!) videos and object-debiased splits for 12 datasets!
🔗Project: utd-project.github.io
by @ninashv.bsky.social et al. 🧵👇
@cvprconference.bsky.social
1️⃣ Can be directly trained on casual videos without the need for 3D annotation.
2️⃣ Built around a feed-forward transformer and lightweight refinement.
Code and more info: ⏩ fwmb.github.io/anycam/
📬 Scholar Inbox is your personal assistant for staying up to date with your literature. It includes visual summaries, collections, search, and a conference planner.
Check out our white paper: arxiv.org/abs/2504.08385
#OpenScience #AI #RecommenderSystems
For more details check out cvg.cit.tum.de
visinf.github.io/primaps/
PriMaPs generate masks from self-supervised features, boosting unsupervised semantic segmentation via stochastic EM.
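As a rough illustration of the stochastic-EM idea, here is a generic mini-batch E/M update over class prototypes. The hard assignments, momentum update, and all names are assumptions for the sketch, not PriMaPs' actual algorithm:

```python
import torch
import torch.nn.functional as F

def stochastic_em_step(features, prototypes, momentum=0.99):
    """One illustrative stochastic-EM step on a mini-batch.

    features:   (N, D) self-supervised (e.g. DINO-like) features
    prototypes: (K, D) class prototypes updated across batches

    E-step assigns features to their nearest prototype; M-step moves each
    assigned prototype toward the batch mean of its features. Generic
    EM-style pseudo-label refinement, not PriMaPs' exact procedure.
    """
    feats = F.normalize(features, dim=1)
    protos = F.normalize(prototypes, dim=1)
    # E-step: hard assignment by cosine similarity.
    assignments = (feats @ protos.t()).argmax(dim=1)  # (N,)
    # M-step: momentum update of the prototypes seen in this batch.
    for k in assignments.unique():
        batch_mean = feats[assignments == k].mean(dim=0)
        prototypes[k] = momentum * prototypes[k] + (1 - momentum) * batch_mean
    return assignments, prototypes

# Toy usage: 512-dim features, 27 pseudo-classes.
features = torch.randn(1024, 512)
prototypes = torch.randn(27, 512)
labels, prototypes = stochastic_em_step(features, prototypes)
```

The "stochastic" part is simply that the E/M updates run on mini-batches rather than the full dataset, so the prototypes are refined incrementally as training proceeds.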