Chonghao Sima
@chonghaosima.bsky.social
Ph.D. student at HKU. Researcher in computer vision, autonomous driving, and robotics (starter). Hobbyist in hiking, J-pop, scenery photography, and anime.
🚀 HERE WE GO! Join us at CVPR 2025 for a full-day tutorial: “Robotics 101: An Odyssey from a Vision Perspective”
🗓️ June 12 • 📍 Room 202B, Nashville

Meet our incredible lineup of speakers covering topics from agile robotics to safe physical AI at: opendrivelab.com/cvpr2025/tut...

#cvpr2025
June 10, 2025 at 12:29 AM
Thanks for sharing! I will host the workshop for the whole day and welcome anyone who is struggling with the current embodied AI trend to visit, chat, and exchange ideas! We want to hear opposing opinions from vision and robotics people on the topic of autonomy.
When at @cvprconference.bsky.social, a major challenge is how to split yourself across so many amazing workshops.
I'm afraid to announce that w/ our workshop on "Embodied Intelligence for Autonomous Systems on the Horizon" we will make this choice even harder: opendrivelab.com/cvpr2025/wor... #cvpr2025
June 8, 2025 at 3:08 AM
Wonderful end-to-end driving benchmark! We are getting **closer and closer** to **closed-loop** evaluation in the real world!
Announcing the 2025 NAVSIM Challenge! What's new? We're testing not only on real recordings—but also imaginary futures generated from the real ones! 🤯

Two rounds: #CVPR2025 and #ICCV2025. $18K in prizes + several $1.5K travel grants. Submit in May for Round 1! opendrivelab.com/challenge2025/ 🧵👇
April 13, 2025 at 4:08 PM
DriveLM got 1k stars on GitHub, my first project to reach such a milestone. Great thanks to all my collaborators who contributed so much to this project, many thanks to the community who participated and contributed better insights on this dataset, and I hope this is not the end!
March 24, 2025 at 4:24 PM
Thanks for sharing! We long to know whether we could improve an e2e planner with limited but online data and compute, as performance seems to plateau with more training data. However, online failure cases remain unexplored, since they could not directly contribute to model performance under the previous training scheme.
🐎 Centaur, our first foray into test-time training for end-to-end driving. No retraining needed, just plug-and-play at deployment given a trained model. Also, theoretically almost no latency overhead, thanks to some clever use of buffers. Surprising how effective this is! arxiv.org/abs/2503.11650
March 17, 2025 at 1:51 PM
Random thoughts today: in humanoid research, the methodology is basically determined by the final tasks/demos you would like to show off.
March 6, 2025 at 8:06 AM
Reposted by Chonghao Sima
🌟 Previewing the UniAD 2.0

🚀 A milestone upgrade on the codebase of the #CVPR2023 best paper UniAD.

👉 Check out this branch: github.com/OpenDriveLab..., and we will share more details soon
March 5, 2025 at 11:54 AM
Reposted by Chonghao Sima
🚀 This year, we’re bringing you three thrilling tracks in Embodied AI and Autonomous Driving, with a total prize pool of $100,000! Now get ready and join the competition!

Visit the challenge website: opendrivelab.com/challenge2025
And more on #CVPR2025: opendrivelab.com/cvpr2025
March 3, 2025 at 11:44 AM
Thanks to all the staff who worked hard to make it happen! We'd love to hear your feedback.
🤖 We are thrilled to announce AgiBot World, the first large-scale robotic learning dataset designed to advance multi-purpose humanoid policies!

Github:
github.com/OpenDriveLab...

HuggingFace:
huggingface.co/agibot-world
December 30, 2024 at 11:40 AM
Random thoughts (again) on:

1. Benchmark & Evaluation & Metrics
2. Data collection (especially tele-op)
3. Policy network architecture & training recipe.
December 11, 2024 at 9:32 AM
Random thoughts today: the situation in humanoids today is similar to autonomous driving back in 2020 or so. Different hardware setups, people favoring RL-based planning and sim2real deployment, etc. Will humanoids follow a similar development curve to driving?
December 4, 2024 at 10:45 AM
Reposted by Chonghao Sima
We implemented undo in @rerun.io by storing the viewer state in the same type of in-memory database we use for the recorded data. Have a look (sound on!)
December 2, 2024 at 3:51 PM
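The undo trick described in the post above, storing viewer state in the same kind of in-memory store as the recorded data, can be sketched roughly as follows. This is a minimal illustration of the general snapshot-log idea, not Rerun's actual implementation; all names (`EventLog`, `record`, `undo`, `redo`) are hypothetical.

```python
from dataclasses import dataclass, field

# Hedged sketch: keep viewer-state snapshots in an append-only in-memory
# log, so undo/redo are just moving a cursor over past snapshots.
@dataclass
class EventLog:
    entries: list = field(default_factory=list)  # append-only snapshots
    cursor: int = -1                             # index of current state

    def record(self, state: dict) -> None:
        # Recording after an undo discards the redo branch,
        # then appends the new snapshot.
        del self.entries[self.cursor + 1:]
        self.entries.append(dict(state))
        self.cursor += 1

    def undo(self) -> dict:
        if self.cursor > 0:
            self.cursor -= 1
        return dict(self.entries[self.cursor])

    def redo(self) -> dict:
        if self.cursor < len(self.entries) - 1:
            self.cursor += 1
        return dict(self.entries[self.cursor])


log = EventLog()
log.record({"selected": None})
log.record({"selected": "camera"})
log.record({"selected": "points"})
assert log.undo() == {"selected": "camera"}
```

Because the state lives in the same store as the data, undo falls out of the storage model rather than needing a separate command-pattern layer.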