Vision-based Assistants in the Real-World
varcvpr2025.bsky.social
Workshop @ CVPR 2025 [https://varworkshop.github.io/]
Call for Participation @cvprconference.bsky.social: Multi-Modal LLMs - prepare to engage in a dynamic, face-to-face conversation with a real human user!

Details: varworkshop.github.io/challenges/

🚨🚨🚨 The winning teams will receive a prize and a contributed talk.

P.S. GPT-4o does not do too well.
March 26, 2025 at 4:38 AM
Call for Participation: We're excited to announce a challenge focused on developing AI assistants that can guide users through workout sessions with intelligent feedback!

🚨The winning teams will receive a prize along with a contributed talk. 🚨

Website: varworkshop.github.io/challenges/
March 10, 2025 at 6:47 PM
Call for Papers and Demos @cvprconference.bsky.social: topics include streaming vision-language models, real-time activity understanding, grounding, egocentric video understanding, and language and robot learning. Contributions are encouraged to include a demo!

Link: varworkshop.github.io/calls/
March 3, 2025 at 8:30 PM
Join us at the CVPR 2025 Workshop on Vision-based Assistants in the Real-World (VAR) and tackle one of AI's biggest challenges: building systems that can comprehend and reason about dynamic, real-world scenes.

Workshop Page: varworkshop.github.io
February 27, 2025 at 5:36 AM