#CVPR2025!🚀
🚀 The first day of conversations at #CVPR2025 is going strong!

✅ Robotics: human interaction simulation
✅ AV: building better pedestrian models for ADAS
✅ AI labs: scalable synthetic body data
✅ Healthcare: shape and pose tracking for remote monitoring

These are real production needs, not theory!
June 13, 2025 at 9:31 PM
🦸‍♀️ How to build an Anything Model?

The author of SAM and SAM-2, Nikhila Ravi, is one of our amazing keynote speakers this morning at #WiCV
@cvprconference.bsky.social

During her talk, she shared what’s next! 🚀
#CVPR2025 #WomenInCV
June 12, 2025 at 4:48 PM
Thread: Workshop Papers from Our Lab at CVPR 2025! 🚀

👏 Huge congrats to our members on these workshop paper acceptances! Excited to see their work at #CVPR2025 🌟

#MPI-INF #D2 #Workshop #AI #ComputerVision #PhD
@mpi-inf.mpg.de
June 11, 2025 at 8:44 PM
Thread: Main Conference Papers from Our Lab at CVPR 2025! 🚀

👏 Big congrats to everyone! Keep an eye out at #CVPR2025 🌟

#MPI-INF #D2 #ComputerVision #AI #PhD #ML

@mpi-inf.mpg.de
June 11, 2025 at 8:20 PM
🚀 HERE WE GO! Join us at CVPR 2025 for a full-day tutorial: “Robotics 101: An Odyssey from a Vision Perspective”
🗓️ June 12 • 📍 Room 202B, Nashville

Meet our incredible lineup of speakers covering topics from agile robotics to safe physical AI at: opendrivelab.com/cvpr2025/tut...

#cvpr2025
June 10, 2025 at 12:29 AM
🚀 Going to #CVPR2025?
Want to create 3D garments faster and at scale?

Catch Maria Korosteleva at the Virtual Try-On Workshop on June 12, 2:20 PM CDT (Room 105B) to learn how synthetic data is changing the game!

🔗 vto-at-cvpr25.github.io

#SMPL #AI #3DGarment #SyntheticData
June 6, 2025 at 1:32 PM
🚀 As #CVPR2025 week kicks off, meet SANSA: Semantically AligNed Segment Anything 2
We turn SAM2 into a semantic few-shot segmenter:
🧠 Unlocks latent semantics in frozen SAM2
✏️ Supports any prompt: fast and scalable annotation
📦 No extra encoders

📎 github.com/ClaudiaCutta...
June 2, 2025 at 6:08 PM
🚀 Excited to announce our #CVPR2025 paper: CAV-MAE Sync: Improving Contrastive Audio-Visual Mask Autoencoders via Fine-Grained Alignment!
We introduce a simple yet effective method for improved audio-visual learning.
🔗 Project: edsonroteia.github.io/cav-mae-sync/
🧵 (1/7)👇
May 22, 2025 at 1:46 PM
1/ 🚀 New Publication Alert: Benchmarking Object Detectors under Real-World Distribution Shifts in Satellite Imagery! 🌍🛰️
We’re excited to announce that our paper "Benchmarking Object Detectors under Real-World Distribution Shifts in Satellite Imagery" has been accepted at #CVPR2025! 🎉
A 🧵
May 15, 2025 at 4:37 PM
Yay, @cvprconference.bsky.social, we’re in! 🎉 #CVPR2025

5 papers accepted, 5 going live 🚀

Catch PromptHMR, DiffLocks, ChatHuman, ChatGarment & PICO at Booth 1333, June 11–15.

Details about the papers in the thread! 👇

#3DBody #SMPL #GenerativeAI #MachineLearning
May 15, 2025 at 12:49 PM
Have an exhibitor booth at #CVPR2025? Tag us with @cvprconference.bsky.social on your posts so we can give them a BOOST 🚀
ALT: a person is pressing a button that says turbo boost on it
May 13, 2025 at 1:54 PM
🚀 The #WiCV at @cvprconference.bsky.social 2025 organizing team is in action! 🎉✨

We are working hard to bring you an amazing edition of this event, dedicated to increasing the visibility of female researchers in computer vision.
Stay tuned for interesting updates!💜

#WiCV #CVPR2025 #WomenInCV
April 16, 2025 at 11:23 AM
Check out our latest #CVPR2025 work on masked scene modeling, which learns semantically relevant features by pre-training natively in 3D! 🚀
Are you tired of projecting 2D features into 3D for general problem solving? Try our self-supervised model, which generates semantic off-the-shelf features natively in 3D! #CVPR2025

Project: phermosilla.github.io/msm/
Arxiv: arxiv.org/abs/2504.06719
Github: github.com/phermosilla/...
April 10, 2025 at 8:50 AM
Check out our recent #CVPR2025 #highlight paper on unsupervised panoptic segmentation🚀
🌍 visinf.github.io/cups/
📢 #CVPR2025 Highlight: Scene-Centric Unsupervised Panoptic Segmentation 🔥

We present CUPS, the first unsupervised panoptic segmentation method trained directly on scene-centric imagery.
Using self-supervised features, depth & motion, we achieve SotA results!

🌎 visinf.github.io/cups
April 4, 2025 at 1:45 PM
🚀 How to preserve object consistency in novel view synthesis (NVS), ensuring correct position, orientation, plausible geometry, and appearance? This is especially critical for image/video generative models and world models.

🎉Check out our #CVPR2025 paper: MOVIS (jason-aplp.github.io/MOVIS) 👇 (1/6)
April 1, 2025 at 1:42 AM
Excited to announce the 1st Workshop on 3D-LLM/VLA at #CVPR2025! 🚀 @cvprconference.bsky.social

Topics: 3D-VLA models, LLM agents for 3D scene understanding, Robotic control with language.

📢 Call for papers: Deadline – April 20, 2025

🌐 Details: 3d-llm-vla.github.io

#llm #3d #Robotics #ai
March 23, 2025 at 9:35 PM
🚀 How to reconstruct 3D scenes with decomposed objects from sparse inputs?

Check out DPRecon (dp-recon.github.io) at #CVPR2025 — it recovers all objects, achieves photorealistic mesh rendering, and supports text-based geometry & appearance editing. More details👇 (1/n)
March 21, 2025 at 9:48 AM
🚨🚀 Call for Papers – CVPR Workshop on Domain Generalization: Evolution, Breakthroughs and Future Horizon @cvprconference.bsky.social ! #cvpr2025 🚀

🌐 cvpr25workshop.m-haris-khan.com
📝 Submission: cmt3.research.microsoft.com/DGEBF2025/ (incl. CVPR accepts)
📅 Deadline: 27 Mar 2025
March 18, 2025 at 7:24 AM
Exciting research from NAVER LABS Europe
@cvpr.bsky.social #CVPR2025 🚀! A breakdown of the 7 papers covering visual navigation, distillation, 3D reconstruction (remember #DUSt3R ☺️), localization & motion segmentation. europe.naverlabs.com/updates/cvpr/ 🧵 1/9 ⬇️
March 17, 2025 at 7:50 PM
Check out the project page 📜. Code will be released soon!
nianticlabs.github.io/morpheus/

Work by Jamie Wynn, Zawar Qureshi, Jakub Powierza, Jamie Watson, and myself.

🚀See you at #CVPR2025!🚀

(5/6)
March 17, 2025 at 2:51 PM
🚀 The #CVPR2025 #EarthVision Data Challenge has already started❗

Organized by #Embed2Scale, this challenge pushes the limits of lossy neural compression for geospatial analytics.

🔗 Join the challenge now: eval.ai/web/challeng...

euspa.bsky.social
March 17, 2025 at 8:22 AM
🚀 Diversity in AI matters! Excited to be part of the VLMs-4-All @CVPR 2025 Workshop, where we push for geo-diverse & culturally aware Vision-Language Models! Submit your papers now, join the challenges & engage in vital discussions. 🌍🔥 #CVPR2025 #VLMs4All2025
📢Excited to announce our upcoming workshop - Vision Language Models For All: Building Geo-Diverse and Culturally Aware Vision-Language Models (VLMs-4-All) @CVPR 2025!
🌐 sites.google.com/view/vlms4all
March 14, 2025 at 7:51 PM
Check out the recent CVG papers at #CVPR2025, including our (@olvrhhn.bsky.social, @neekans.bsky.social, @dcremers.bsky.social, Christian Rupprecht, and @stefanroth.bsky.social) work on unsupervised panoptic segmentation. The paper will soon be available on arXiv. 🚀
We are thrilled to have 12 papers accepted to #CVPR2025. Thanks to all our students and collaborators for this great achievement!
For more details check out cvg.cit.tum.de
March 13, 2025 at 3:49 PM
Thrilled to announce that our fourth and last invited speaker for MULA 2025 in Nashville is Katerina Fragkiadaki, Associate Professor at CMU! 🚀
Don’t miss her talk at the workshop. Looking forward to seeing you all there! #CVPR2025
March 13, 2025 at 9:56 AM
🚀Call for #CVPR2025 workshop papers 📝

Present your latest work on Open-World 3D Scene Understanding with Foundation Models🌍☀️

📆Paper Deadline: March 25
(4 pages abstract, or 8 pages paper)

🔗OpenReview: openreview.net/group?id=thecvf.com/CVPR/2025/Workshop/OpenSUN3D

🏡Details: opensun3d.github.io
March 3, 2025 at 7:34 PM