#cvpr2024
Zilong Guo, Yi Luo, Long Sha, Dongxu Wang, Panqu Wang, Chenyang Xu, Yi Yang
2nd Place Solution for CVPR2024 E2E Challenge: End-to-End Autonomous Driving Using Vision Language Model
https://arxiv.org/abs/2509.02659
September 4, 2025 at 8:00 AM
Nice closing note for my #CVPR2024 lead GC journey: the conference just won the IEEE iCON award for outstanding achievement in conference innovation at IEEE Convene 2025. We had an exceptional organizing committee last year, and this award is a testament to that!
August 1, 2025 at 3:30 PM
After #CVPR2024, we further investigate beyond-Euclidean computer vision and extend the fundamental task of dimensionality reduction to the largely uncharted Finsler manifolds, with core applications in data analysis and visualisation!
March 25, 2025 at 7:32 AM
Motivation: after Denys exploited issues in our S23DR #CVPR2024 competition metric, wireframe edit distance (WED), we decided to study it systematically.

We asked 3D modelers, CV people (us), and laypeople to rank many wireframes.
Designing the labeling setup is not easy.
2/
March 12, 2025 at 8:29 AM
A standout moment from the #CVPR2024 Seattle musical performance!
March 5, 2025 at 8:43 PM
All booked for #CVPR2025. Missed #CVPR2024, so I'm looking forward to connecting with the community again.
March 2, 2025 at 7:41 PM
January 25, 2025 at 9:00 PM
🧵 1/3 Many at #CVPR2024 & #ECCV2024 asked what would be next in our workshop series.

We're excited to announce "How to Stand Out in the Crowd?" at #CVPR2025 Nashville - our 4th community-building workshop featuring this incredible speaker lineup!

🔗 sites.google.com/view/standou...
January 13, 2025 at 4:16 PM
Where do we get data for training relighting models? 🤔

We used our good old StyLitGAN from #CVPR2024 to generate diverse training data and filtered it with CLIP to keep the top 1000 images. We combine this with the existing MIT Multi-Illum & Big Time datasets. Roughly 2,500 unique images make up our training set
December 5, 2024 at 3:58 PM
#CVPR2025 received just over 13,000 submissions – up by about 13% from #CVPR2024.

If CVPR was a stock it would tank, as the growth didn’t meet the street’s expectation 🤓
November 28, 2024 at 4:32 PM
ScaLR is making one step toward 3D foundation models! 🙌 The self-supervised features it learns are quite close to fully-supervised 3D semantic segmentation methods. Still more to do, but quite excited about the progress! 🌟 #CVPR2024
📢 Relive the #CVPR2024 poster ScaLR w/ Gilles Puy. It's a Lidar pretrained model distilled from vision foundation models w/o any labels.
💪 Our best ScaLR is publicly available & well suited for semantic tasks w/ few or no labels & still SoTA results on linear probing
www.linkedin.com/posts/andrei...
Andrei Bursuc on LinkedIn: #cvpr2024 #cvpr
📢 Re-live the poster presentation of ScaLR at #CVPR2024 by the lead author Gilles Puy. The ScaLR family consists of our latest Lidar pretrained models…
www.linkedin.com
November 27, 2024 at 10:13 AM
Get your method on the #BOP leaderboard and let citations go brrrr.

The deadline for this year's challenge was intentionally set after the #CVPR2024 deadline, so the latest and greatest methods can participate.
Submitted a 6D Object Pose Estimation method at CVPR? 📝

Show the world that it actually works in practice and join the BOP challenge. 🦾

7 days left to win the BOP 2024 awards in the model-based and model-free tracks. 🏆
BOP: Benchmark for 6D Object Pose Estimation
bop.felk.cvut.cz
November 22, 2024 at 7:37 PM
ICYMI our PointBeV #CVPR2024 poster: here's a quick talk by lead author Loïck Chambon.
It brings a change of paradigm to multi-camera bird's-eye-view (BeV) segmentation via a flexible mechanism that produces sparse BeV points able to adapt to the situation, task, and compute budget

www.linkedin.com/posts/andrei...
Andrei Bursuc on LinkedIn: #cvpr2024 #cvpr
In case you missed our PointBeV poster at #CVPR2024 here's a quick presentation by the lead author Loïck C.. PointBEV brings a change of paradigm in…
www.linkedin.com
November 22, 2024 at 11:18 AM
Nice CVPR pump-up video from the IEEE Computer Society highlighting #CVPR2024:

www.youtube.com/watch?v=8B-L...
CVPR 2024 Recap and Highlights
YouTube video by IEEEComputerSociety
www.youtube.com
November 21, 2024 at 1:46 AM
One more for @alfcnz.bsky.social. What do you think about this poster? The backstory: #CVPR2024 lost their poster 😱
November 19, 2024 at 2:43 AM
Asra Aslam, Sachini Herath, Ziqi Huang, Estefania Talavera, Deblina Bhattacharjee, Himangi Mittal, Vanessa Staderini, Mengwei Ren, Azade Farshad
WiCV@CVPR2024: The Thirteenth Women In Computer Vision Workshop at the Annual CVPR Conference
https://arxiv.org/abs/2411.02445
November 6, 2024 at 5:01 AM
We presented "YolOOD: Utilizing Object Detection Concepts for Multi-Label Out-of-Distribution Detection" at CVPR 2024 | fltech - the Fujitsu Research technical blog
We presented "YolOOD: Utilizing Object Detection Concepts for Multi-Label Out-of-Distribution Detection" at CVPR 2024 - fltech - the Fujitsu Research technical blog
Hello, this is Eda from the Artificial Intelligence Laboratory. At Fujitsu, we develop technologies that enable the safe use of AI. We recently presented one of our research results at CVPR, one of the most competitive conferences in machine learning and computer vision, and this post introduces that work.
blog.fltech.dev
August 11, 2024 at 6:10 PM
In the "Ego4D EgoSchema Challenge" competition at CVPR 2024, the world's premier conference on image recognition, Panasonic Connect ... - Panasonic Group https://prtimes.jp/main/html/rd/p/000005803.000003442.html
July 16, 2024 at 2:52 AM
A first-year ML engineer at ZOZO goes to CVPR 2024: participation report - ZOZO TECH BLOG
https://techblog.zozo.com/entry/cvpr2024-report
A first-year ML engineer at ZOZO goes to CVPR 2024: participation report - ZOZO TECH BLOG
A report on attending CVPR 2024, the international conference on computer vision and pattern recognition held in Seattle from June 17 to 21.
techblog.zozo.com
July 11, 2024 at 3:04 AM
Proud to share our new #CVPR2024 paper! Discover how RIOS and ANU improve NeRF by modeling transparent objects with our innovative neural surface refinement technique. #NeRF #AI #3DVision

Project page
tnsr.rios.ai
July 4, 2024 at 9:49 PM
Today's Zenn trending

Third-Generation Autonomous Driving @ CVPR 2024
This article surveys the latest autonomous-driving research presented at CVPR 2024.
It focuses in particular on third-generation autonomous-driving technology that leverages foundation models such as LLMs and VLMs, walking through specific papers, their contents, and their results.
It also covers the evolution of autonomous-driving datasets and innovative work such as CarLLaVA, which achieves driving from camera images alone.
Third-Generation Autonomous Driving @ CVPR 2024
Introduction: I'm Sasaki (kento_sasaki1) from Turing's generative-AI team, where we are developing multimodal foundation models toward fully autonomous driving. I recently attended CVPR 2024, a top conference in computer vision and machine learning held in Seattle from June 17 to 21, and the Vision Language Model workshop, The 3rd
zenn.dev
July 1, 2024 at 9:16 PM