Snehal Jauhri
@snehaljauhri.bsky.social
ML for Robotics | PhD candidate @ TU Darmstadt with Georgia Chalvatzaki | Research Intern @AllenAI | Building AI for the Home Robot | https://pearl-lab.com/people/snehal-jauhri
More details and results in the paper, and stay tuned for the 2HANDS dataset & code release!

📄Paper: arxiv.org/abs/2503.09320
🌐 Website: sites.google.com/view/2handedafforder

Work done with Marvin Heidinger, Vignesh Prasad & @georgiachal.bsky.social

See you in Hawaii at #ICCV2025! 🌴
2HandedAfforder
Marvin Heidinger*, Snehal Jauhri*, Vignesh Prasad, and Georgia Chalvatzaki. PEARL Lab, TU Darmstadt, Germany (*equal contribution). International Conference on Computer Vision (ICCV) 2025.
sites.google.com
July 14, 2025 at 4:03 AM
We can then use our high-quality dataset to train or fine-tune a VLM that takes the activity/task text prompt as input and predicts bimanual affordance masks (one for the left and one for the right robot hand).

4/5
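A minimal sketch of the kind of prediction interface this implies: an image feature map fused with a prompt embedding, then decoded into two affordance masks, one per hand. This is an illustrative assumption, not the paper's architecture; all module names and sizes are made up.

```python
# Hypothetical sketch (not the authors' model): fuse an RGB frame with a task
# prompt embedding and decode two affordance mask logits, one per hand.
import torch
import torch.nn as nn

class BimanualAffordanceHead(nn.Module):
    def __init__(self, img_dim=256, txt_dim=512):
        super().__init__()
        # Stand-in visual backbone: patchify the image into a coarse feature map.
        self.img_encoder = nn.Sequential(
            nn.Conv2d(3, img_dim, kernel_size=8, stride=8), nn.ReLU())
        # Project the text prompt embedding into the image feature space.
        self.txt_proj = nn.Linear(txt_dim, img_dim)
        # Two decoders: one mask for the left hand, one for the right hand.
        self.left_decoder = nn.ConvTranspose2d(img_dim, 1, kernel_size=8, stride=8)
        self.right_decoder = nn.ConvTranspose2d(img_dim, 1, kernel_size=8, stride=8)

    def forward(self, image, prompt_emb):
        feats = self.img_encoder(image)                     # (B, C, H/8, W/8)
        cond = self.txt_proj(prompt_emb)[:, :, None, None]  # (B, C, 1, 1)
        fused = feats * cond                                # simple FiLM-like fusion
        return self.left_decoder(fused), self.right_decoder(fused)

model = BimanualAffordanceHead()
img = torch.randn(1, 3, 224, 224)           # RGB frame
prompt = torch.randn(1, 512)                # e.g. embedding of "pour milk into bowl"
left_mask, right_mask = model(img, prompt)  # (1, 1, 224, 224) mask logits each
```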
July 14, 2025 at 4:03 AM
We extract bimanual affordance masks from egocentric RGB video datasets using video-based hand inpainting and object reconstruction.

No manual labeling is required. The narrations from egocentric datasets also provide free-form text supervision! (e.g. "pour milk into bowl")

3/5
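As a rough illustration of the labeling idea (not the released pipeline code): after inpainting the hands out of a frame and reconstructing the occluded object, the affordance region for one hand can be approximated as the part of the reconstructed object mask the hand was covering, paired with the narration as free text supervision. All names and shapes below are hypothetical.

```python
# Hedged sketch of the extraction idea; variable and file names are illustrative.
import numpy as np

def affordance_mask(object_mask_inpainted: np.ndarray,
                    hand_mask: np.ndarray) -> np.ndarray:
    """Boolean HxW mask: reconstructed-object pixels that the hand occluded,
    i.e. where the hand made contact while performing the narrated task."""
    return object_mask_inpainted & hand_mask

# One (image, text, mask) training sample per hand.
H, W = 480, 640
obj = np.zeros((H, W), dtype=bool); obj[200:300, 250:400] = True        # reconstructed object
left_hand = np.zeros((H, W), dtype=bool); left_hand[220:280, 260:320] = True
sample = {
    "image": "frame_000123.jpg",            # hypothetical frame path
    "text": "pour milk into bowl",          # narration from the egocentric dataset
    "left_mask": affordance_mask(obj, left_hand),
}
```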
July 14, 2025 at 4:03 AM
The Problem:
Most affordance detection methods just segment object parts & do not predict actionable regions for robots!

Our solution?
Use egocentric bimanual human videos to extract precise affordance regions considering object relationships, context, & hand coordination!

2/5
July 14, 2025 at 4:03 AM
Learn more at the workshop website: egoact.github.io/rss2025

Happy to be organizing this with @georgiachal.bsky.social, Yu Xiang, @danfei.bsky.social and @galasso.bsky.social!
Web home for EgoAct: 1st Workshop on Egocentric Perception and Action for Robot Learning @ RSS2025
egoact.github.io
April 6, 2025 at 9:24 PM
Call for Contributions:
We’re inviting contributions in the form of:
📝 Full papers OR
📝 4-page extended abstracts
🗓️ Submission Deadline: April 30, 2025
🏆 Best Paper Award, sponsored by Meta!
April 6, 2025 at 9:24 PM
Core workshop topics include:
🥽 Egocentric interfaces for robot learning
🧠 High-level action & scene understanding
🤝 Human-to-robot transfer
🧱 Foundation models from human activity datasets
🛠️ Egocentric world models for high-level planning & low-level manipulation
April 6, 2025 at 9:24 PM
I'm working on robot learning and perception :)
November 23, 2024 at 7:36 AM