Stop labeling affordances or distilling them from VLMs.
Extract affordances from bimanual human videos instead!
Excited to share 2HandedAfforder: Learning Precise Actionable Bimanual Affordances from Human Videos, accepted at #ICCV2025! 🎉
🧵1/5
Congratulations to the winners of the Best Paper Awards: EgoDex & DexWild!
The full recording is available at: youtu.be/64yLApbBZ7I
Some highlights:
We’re bringing together researchers exploring how egocentric perception can drive next-gen robot learning!
🔗 Full info: egoact.github.io/rss2025
@roboticsscisys.bsky.social