Snehal Jauhri
@snehaljauhri.bsky.social
ML for Robotics | PhD candidate @ TU Darmstadt with Georgia Chalvatzaki | Research Intern @AllenAI | Building AI for the Home Robot | https://pearl-lab.com/people/snehal-jauhri
We can then use our high-quality dataset to train or fine-tune a VLM that takes the activity/task text prompt as input and predicts bimanual affordance masks (for the left and right robot hands).
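A minimal sketch of this prediction setup, not the paper's actual architecture: toy vision and text encoders stand in for a pretrained VLM backbone, and a small decoder outputs one mask per hand. All module sizes, the toy tokenization, and the training-step shapes below are illustrative assumptions.

```python
# Minimal sketch (not the 2HandedAfforder architecture): an RGB image plus a
# task-text embedding go in, two affordance mask logits (left / right hand) come out.
import torch
import torch.nn as nn

class BimanualAffordanceHead(nn.Module):
    def __init__(self, vocab_size=1000, text_dim=64, feat_dim=64):
        super().__init__()
        # Toy stand-ins for a pretrained VLM's vision and language towers.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.text_encoder = nn.EmbeddingBag(vocab_size, text_dim)
        # Fuse the text embedding into the spatial features, then decode per-hand masks.
        self.fuse = nn.Conv2d(feat_dim + text_dim, feat_dim, 1)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 2, 4, stride=2, padding=1),  # 2 channels: left, right
        )

    def forward(self, image, token_ids):
        feat = self.image_encoder(image)                       # (B, C, H/4, W/4)
        txt = self.text_encoder(token_ids)                     # (B, text_dim)
        txt = txt[:, :, None, None].expand(-1, -1, feat.shape[2], feat.shape[3])
        fused = self.fuse(torch.cat([feat, txt], dim=1))
        return self.decoder(fused)                             # (B, 2, H, W) mask logits

# One illustrative training step on extracted (frame, prompt, masks) triplets.
model = BimanualAffordanceHead()
optim = torch.optim.Adam(model.parameters(), lr=1e-4)
image = torch.rand(4, 3, 128, 128)                      # RGB frames
tokens = torch.randint(0, 1000, (4, 8))                 # tokenized task prompts
masks = torch.randint(0, 2, (4, 2, 128, 128)).float()   # left/right affordance masks
logits = model(image, tokens)
loss = nn.functional.binary_cross_entropy_with_logits(logits, masks)
loss.backward()
optim.step()
```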

4/5
July 14, 2025 at 4:03 AM
We extract bimanual affordance masks from egocentric RGB video datasets using video-based hand inpainting and object reconstruction.

No manual labeling is required. The narrations from egocentric datasets also provide free-form text supervision! (e.g., "pour milk into bowl")
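To illustrate the core labeling idea only (the full pipeline also inpaints the hands and reconstructs the occluded object), here is a toy sketch: given per-frame hand and object segmentation masks, the object region near each hand's contact area is taken as that hand's affordance mask. The dilation radius and the toy masks are arbitrary assumptions.

```python
# Illustrative sketch of contact-based affordance extraction (not the full pipeline).
import numpy as np
from scipy.ndimage import binary_dilation

def affordance_from_contact(object_mask, hand_mask, radius=10):
    """Object pixels within `radius` of the hand: a proxy for the actionable region."""
    structure = np.ones((2 * radius + 1, 2 * radius + 1), dtype=bool)
    near_hand = binary_dilation(hand_mask, structure=structure)
    return object_mask & near_hand

# Toy frame: an "object" rectangle touched by a left-hand and a right-hand blob.
H, W = 120, 160
obj = np.zeros((H, W), dtype=bool)
obj[40:80, 50:110] = True
left_hand = np.zeros_like(obj)
left_hand[55:65, 35:52] = True      # touches the object's left edge
right_hand = np.zeros_like(obj)
right_hand[55:65, 108:125] = True   # touches the object's right edge

left_aff = affordance_from_contact(obj, left_hand)
right_aff = affordance_from_contact(obj, right_hand)
# The video narration ("pour milk into bowl") becomes the free-form text label
# paired with this (frame, left_aff, right_aff) training example.
print(left_aff.sum(), right_aff.sum())
```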

3/5
July 14, 2025 at 4:03 AM
The Problem:
Most affordance detection methods just segment object parts & do not predict actionable regions for robots!

Our solution?
Use egocentric bimanual human videos to extract precise affordance regions that account for object relationships, context, & hand coordination!

2/5
July 14, 2025 at 4:03 AM
📢 PSA for the robotics community:
Stop labeling affordances or distilling them from VLMs.
Extract affordances from bimanual human videos instead!

Excited to share 2HandedAfforder: Learning Precise Actionable Bimanual Affordances from Human Videos, accepted at #ICCV2025! 🎉

🧵1/5
July 14, 2025 at 4:03 AM
Thank you to all the speakers & attendees for making the EgoAct workshop a great success!

Congratulations to the winners of the Best Paper Awards: EgoDex & DexWild!

The full recording is available at: youtu.be/64yLApbBZ7I

Some highlights:
June 23, 2025 at 1:04 AM
📢 Excited to announce EgoAct 🥽🤖: the 1st Workshop on Egocentric Perception and Action for Robot Learning at #RSS2025 in LA!

We’re bringing together researchers exploring how egocentric perception can drive next-gen robot learning!

🔗 Full info: egoact.github.io/rss2025

@roboticsscisys.bsky.social
April 6, 2025 at 9:24 PM