Paper submission is open! If you're eager to push the boundaries of multimodal AI, then BEAM 2025 (co-located with #CVPR2025 in Nashville, TN) is the event for you! Co-organized by Amazon & @scsatcmu.bsky.social . #AI #MultimodalAI #MachineLearning #CallForPapers
February 18, 2025 at 7:43 PM
What is more stressful?
#CVPR2025 @cvprconference.bsky.social rebuttals or a serious game of Jenga?
We were testing this hypothesis over a post-reviews pizza and games evening...
Remember to take time off...
January 23, 2025 at 8:30 PM
#CVPR2025 Fri June 13 PM
🤟 Lost in Translation, Found in Context: Sign Language Translation with Contextual Cues
Youngjoon Jang, Haran Raajesh, Liliane Momeni @gulvarol.bsky.social Andrew Zisserman @oxford-vgg.bsky.social
📄pdf: arxiv.org/abs/2501.09754
🌐webpage: www.robots.ox.ac.uk/~vgg/researc...
April 30, 2025 at 1:04 PM
Honored to be selected among the Outstanding Reviewers at #CVPR2025.
#UAntwerp @sqirllab.bsky.social #IDLab
cvpr.thecvf.com/Conferences/...
May 13, 2025 at 11:54 AM
I will be at #CVPR2025 to present this work (RUBIK: A Structured Benchmark for Image Matching across Geometric Challenges) at 4pm, poster #88.
Come if you want to discuss!
June 15, 2025 at 8:28 PM
Well done to AIML’s Prof Simon Lucey, Prof Anton van den Hengel, and Dr Hemanth Saratchandran, who represented AIML at prominent events and meetups across the US and Canada: speaking at CVPR2025, hosting “Aussies in AI,” visiting top tech labs, and joining panels on generative and foundational AI.
July 2, 2025 at 2:09 AM
Danier Duolikun presents our work on pre-trained visual representations for visuomotor robot learning today at #CVPR2025 in the 6th Embodied AI Workshop!
🗣️ Talk: 15:30, Room 101 D
📌 Poster: 12:00–13:30, ExHall D (#140–169)
Come say hi!
More info here: tsagkas.github.io/pvrobo/
June 12, 2025 at 1:09 PM
Honorable Mentions
Congratulations to the #CVPR2025 Honorable Mentions for Best Paper!
@GoogleDeepMind, @UCBerkeley, @UMich, @AIatMeta, @nyuniversity, @berkeley_ai, #AllenInstituteforAI, @UW, #UniversityCollegeLondon, @UniversityLeeds, @ZJU_China, @NTUsg, @PKU1898, @Huawei Singapore Research Center
June 14, 2025 at 1:45 AM
A few hours to recover from #CVPR2025 rebuttals...
then the 4th MDEC starts!
Dev phase opens tomorrow, with some nice updates!
Update about the 4th Monocular Depth Estimation Workshop at #CVPR2025:
🎉 Website is LIVE: jspenmar.github.io/MDEC/
🎉 Keynotes: Peter Wonka, Yiyi Liao, and Konrad Schindler
🎉 Challenge updates: new prediction types, baselines & metrics
January 31, 2025 at 7:35 PM
🔥Our paper "BioX-CPath: Biologically-driven Explainable Diagnostics for Multistain IHC Computational Pathology" was accepted at #CVPR2025!
🚀We'll be releasing the paper and code repo ASAP, stay tuned! #multistain #IHC #pathology #ExplainableAI #GNNs #PrecisionMedicine #Immunology
February 28, 2025 at 3:06 PM
#CVPR2025 Six years have passed since the 'Computer Vision After 5 Years' workshop at CVPR 2019. In it, Bill Freeman predicted that vision-science-inspired algorithms would lead the way. Instead, the field is now dominated by generative AI and foundation models. 1/2
June 12, 2025 at 12:26 AM
Attending #CVPR2025 @cvprconference.bsky.social?
Dive into the latest in generative AI, creative tools, and visual content understanding.
🗓️ Date: June 12, 2025
🗓️ CVEU Workshop Schedule #CVPR2025
📍All times in Nashville Time
🔗Full program details can be found here: cveu.github.io
June 10, 2025 at 11:34 PM
What happens when production-ready digital humans hit #CVPR2025?
Real-time avatars, SMPL magic, on-prem pipelines & cross-industry impact.
This is just the start.
🎥 Recap: youtu.be/quA75f25r78
📩 sales@meshcapade.com
#AI #SMPL #DigitalHumans
June 20, 2025 at 2:19 PM
Home stretch on #CVPR2025/#ICML2025 submissions 😅 Time to bring out the BIG knives, serious cuts incoming. LFG 🗡️
January 30, 2025 at 3:40 PM
This year, #CVPR2025 received 13,008 valid submissions that underwent the review process. The program committee recommended 2,878 papers for acceptance, resulting in an acceptance rate of 22.1%.
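The quoted acceptance rate follows directly from the two counts stated in the post; as a quick worked check:

```latex
% Acceptance rate implied by the stated submission and acceptance counts
\frac{2878}{13008} \approx 0.2212 \quad\Longrightarrow\quad \text{acceptance rate} \approx 22.1\%
```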
February 28, 2025 at 1:44 AM
Object masks & tracks for HD-EPIC have been released. This completes our highly-detailed annotations.
Also, HD-EPIC VQA challenge is open [Leaderboard closes 19 May]... can you be 1st winner?
codalab.lisn.upsaclay.fr/competitions...
Btw, HD-EPIC was accepted @cvprconference.bsky.social #CVPR2025
🛑📢
HD-EPIC: A Highly-Detailed Egocentric Video Dataset
hd-epic.github.io
arxiv.org/abs/2502.04144
New collected videos
263 annotations/min: recipe, nutrition, actions, sounds, 3D object movement & fixture associations, masks.
26K VQA benchmark to challenge current VLMs
1/N
April 3, 2025 at 7:06 PM
💻We've released the code for our #CVPR2025 paper MAtCha!
🍵MAtCha reconstructs sharp, accurate and scalable meshes of both foreground AND background from just a few unposed images (e.g., 3 to 10 images)...
...While also working with dense-view datasets (hundreds of images)!
April 3, 2025 at 10:33 AM
📢📢📢 Submit to our workshop on Physics-inspired 3D Vision and Imaging at #CVPR2025!
Speakers 🗣️ include Ioannis Gkioulekas, Laura Waller, Berthy Feng, @shwbaek.bsky.social and Gordon Wetzstein!
🌐 pi3dvi.github.io
You can also just come hang out with us at the workshop @cvprconference.bsky.social!
March 13, 2025 at 6:47 PM
costs. Our method achieved 1st place in the CVPR2025 NTIRE Low Light Enhancement Challenge. Extensive experiments conducted on synthetic and real-world benchmark datasets demonstrate that the proposed method significantly outperforms state-of-the-art [5/6 of https://arxiv.org/abs/2504.19295v1]
April 29, 2025 at 6:05 AM
🚀 Excited to announce our #CVPR2025 paper: CAV-MAE Sync: Improving Contrastive Audio-Visual Mask Autoencoders via Fine-Grained Alignment!
We introduce a simple yet effective method for improved audio-visual learning.
🔗 Project: edsonroteia.github.io/cav-mae-sync/
🧵 (1/7)👇
May 22, 2025 at 1:46 PM