Yixuan Wang
@yixuanwang.bsky.social
Columbia CS PhD working on robotics. Worked at Boston Dynamics AI Institute and Google X.
https://wangyixuan12.github.io/
Fun fact – this work was also recognized as the best embodied AI poster at the Michigan AI Symposium - an amazing and fun event held at my alma mater 💙💛
January 24, 2025 at 4:45 PM
Thanks to my amazing collaborators - Leonor, Tarik, Jiuguang, and Yunzhu!! This project would have been impossible without their support!! I also want to thank the many amazing folks at the Boston Dynamics AI Institute - it has been an amazing internship experience! (9/9)
January 24, 2025 at 4:44 PM
⬇️ Links to our project. Stay tuned for the code release!

🔗 Website: curiousbot.theaiinstitute.com
📷 Video: youtu.be/1fK9-OrSwpQ
📄 Paper: arxiv.org/abs/2501.13338

(8/9)
CuriousBot: Interactive Mobile Exploration via Actionable 3D Relational Object Graph
January 24, 2025 at 4:43 PM
How well does our system work? We conduct a failure analysis and break down the failure reasons. We found that perception, decision-making, and action execution are still the major failure modes, which we aim to address in future work. (7/9)
January 24, 2025 at 4:42 PM
What does the system look like? We build a perception module on top of visual foundation models and SLAM to construct the actionable 3D relational object graph. We then serialize the graph and feed it into foundation models to make decisions and execute low-level robot skills. (6/9)
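For a rough sense of what the serialization step could look like - this is a minimal sketch, not our released code, and all names (OBJECT_GRAPH, serialize_graph, the relation labels) are hypothetical:

```python
# Sketch: turn a relational object graph into text an LLM can read,
# so a foundation model can pick the next exploration action.
# All names here are hypothetical placeholders.

OBJECT_GRAPH = {
    "cabinet_0": {"relations": {"inside": "unknown_space_0"},
                  "actions": ["open"]},
    "chair_1":   {"relations": {"behind": "unknown_space_1"},
                  "actions": ["push"]},
    "blanket_2": {"relations": {"under": "unknown_space_2"},
                  "actions": ["lift"]},
}

def serialize_graph(graph: dict) -> str:
    """Flatten the graph into one line per relation for the prompt."""
    lines = []
    for obj, info in graph.items():
        for rel, target in info["relations"].items():
            lines.append(f"{obj} --{rel}--> {target} "
                         f"(available actions: {', '.join(info['actions'])})")
    return "\n".join(lines)

prompt = (
    "You are an exploration planner. Given the scene graph below, "
    "choose one (object, action) pair that reveals the most unknown space.\n"
    + serialize_graph(OBJECT_GRAPH)
)
print(prompt)  # this text would go to the foundation model; its chosen
               # (object, action) pair would dispatch a low-level skill
```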
January 24, 2025 at 4:42 PM
We show that our system can explore diverse environments, such as house-like environments and deformable objects, and deploy various robot skills, including checking the bottom of objects, opening, lifting, pushing, and flipping. (5/9)
January 24, 2025 at 4:41 PM
Why bother building the actionable 3D relational object graph?
Imagine you want your robot to collect toys scattered and hidden around the house. This representation can not only guide the robot to find all the toys but also be used to gather them all into its blanket. (4/9)
January 24, 2025 at 4:40 PM
Inspired by how humans explore, we build an **actionable 3D relational object graph** to (1) reason about object relations and (2) decide on actions for exploration. This clip shows how the robot (1) localizes unknown spaces and (2) executes skills such as opening, lifting, and pushing. (3/9)
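For the curious, here is a minimal sketch of what such a graph could look like as a data structure - assuming nodes are objects, edges are spatial relations pointing at (possibly unknown) spaces, and each relation carries the skill that reveals it. All class and field names are hypothetical, not our actual implementation:

```python
# Sketch of an actionable 3D relational object graph.
# Names are hypothetical placeholders for illustration only.

from dataclasses import dataclass, field

@dataclass
class ObjectNode:
    name: str
    explored: bool = False          # has this space been observed yet?
    pose: tuple = (0.0, 0.0, 0.0)   # 3D position from SLAM / perception

@dataclass
class Relation:
    kind: str        # e.g. "inside", "behind", "under"
    source: str      # object whose relation hides space
    target: str      # the (possibly unknown) space it points to
    skill: str       # action that reveals it: "open", "push", "lift"

@dataclass
class RelationalObjectGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def actionable_edges(self):
        """Relations whose target space is still unexplored."""
        return [e for e in self.edges
                if not self.nodes[e.target].explored]

# Example: a cabinet hiding space inside it, revealed by "open"
g = RelationalObjectGraph()
g.nodes["cabinet_0"] = ObjectNode("cabinet_0", explored=True)
g.nodes["space_0"] = ObjectNode("space_0", explored=False)
g.edges.append(Relation("inside", "cabinet_0", "space_0", "open"))
print(g.actionable_edges())  # -> the one edge still worth acting on
```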
January 24, 2025 at 4:39 PM
How do humans interactively explore the environment?
Humans see – we often understand object relations first, such as the space **inside** the cabinet or **behind** the chair.
Humans do – then we apply actions to reveal the unknown space, such as opening or pushing. (2/9)
January 24, 2025 at 4:38 PM
Thanks!
November 19, 2024 at 3:23 PM
Thank you, Chris, for the great list! I am a PhD student at Columbia working on robotics. Could you please add me to the list? Thanks!
November 19, 2024 at 3:19 PM