Authors: G. Paschalidis, R. Wilschut, D. Antić, O. Taheri, D. Tzionas
Collaboration: University of Amsterdam, MPI for Intelligent Systems
Project: gpaschalidis.github.io/cwgrasp
Paper: arxiv.org/abs/2408.16770
Code: github.com/gpaschalidis...
🧵 10/10
You can easily integrate these into your code & build new research!
🧩 CGrasp: github.com/gpaschalidi...
🧩 CReach: github.com/gpaschalidi...
🧩 ReachingField: github.com/gpaschalidi...
🧩 CWGrasp: github.com/gpaschalidi...
🧵 9/10
👉 requires 500x fewer samples & runs 10x faster than SotA,
👉 produces grasps that are perceived as more realistic than SotA ~70% of the time,
👉 works well for objects placed at various "heights" from the floor,
👉 generates both right- & left-hand grasps.
🧵 8/10
👉 This produces a hand-only guiding grasp & a reaching body that are already mutually compatible!
🎯 Thus, we need to conduct a *small* refinement *only* for the body so that its fingers match the guiding hand!
🧵 7/10
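The refinement step above can be sketched in a toy form. This is an assumed, heavily simplified illustration (not the paper's actual optimization): the guiding hand's joints stay frozen, and only the body's finger joints are updated by gradient steps on a squared-L2 loss until they match.

```python
import numpy as np

# Frozen guiding-hand joints (toy data: 21 joints x 3 coords).
guide_joints = np.random.default_rng(1).normal(size=(21, 3))
# Body finger joints start slightly off from the guiding hand.
body_joints = guide_joints + 0.05

for _ in range(200):
    grad = 2 * (body_joints - guide_joints)  # gradient of the squared L2 loss
    body_joints -= 0.05 * grad               # update the *body* only; the
                                             # guiding hand is never touched
```

Each step shrinks the residual by a constant factor, so a small number of iterations suffices because the hand and body were compatible to begin with.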
👉 Importantly, the palm & arm direction satisfy a desired 3D direction vector given as a condition!
👉 This direction is sampled from ⚙️ ReachingField!
🧵 6/10
👉 Objects near the ground are likely grasped from high above
👉 Objects high above the ground are likely grasped from below
🧵 5/10
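The height prior above can be sketched as a toy sampler. This is only an illustration of the intuition, not the paper's ReachingField model: the object's height biases the elevation of the sampled approach direction (pointing from the object toward where the hand comes from), so low objects yield upward-pointing directions (grasped from above) and high objects yield downward-pointing ones (grasped from below).

```python
import numpy as np

def sample_reaching_direction(object_height, rng=np.random.default_rng(0)):
    """Toy height-conditioned direction sampler (hypothetical).
    Returns a unit vector from the object toward the approaching hand."""
    # Map height (meters) to a mean elevation: low object -> positive
    # elevation (hand above), high object -> negative (hand below).
    mean_elev = np.clip((0.9 - object_height) * np.pi / 2,
                        -np.pi / 2, np.pi / 2)
    elev = rng.normal(mean_elev, 0.2)   # elevation of the approach vector
    azim = rng.uniform(0, 2 * np.pi)    # uniform azimuth around the object
    d = np.array([np.cos(elev) * np.cos(azim),
                  np.cos(elev) * np.sin(azim),
                  np.sin(elev)])
    return d / np.linalg.norm(d)

d_low = sample_reaching_direction(0.1)   # object near the floor
d_high = sample_reaching_direction(1.8)  # object above shoulder height
```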
CWGrasp consists of three novel models:
👉 ReachingField,
👉 CGrasp,
👉 CReach.
🧵 4/10
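How the three models fit together can be sketched with toy stand-ins. The function names, signatures, and returned fields below are illustrative only, not the repository's actual API; the point is that one shared direction, sampled from a ReachingField-like prior, conditions both the hand grasp and the reaching body, so the two agree by construction.

```python
import numpy as np

def reaching_field(obj_pos, rng):
    """Stand-in: sample a plausible 3D approach direction for the object."""
    d = rng.normal(size=3)
    return d / np.linalg.norm(d)

def cgrasp(obj_pos, direction):
    """Stand-in: hand-only grasp whose palm faces along `direction`."""
    return {"palm_dir": direction, "wrist": obj_pos - 0.1 * direction}

def creach(obj_pos, direction):
    """Stand-in: reaching body whose arm points along the same direction."""
    return {"arm_dir": direction, "wrist": obj_pos - 0.1 * direction}

def cwgrasp(obj_pos, rng=np.random.default_rng(0)):
    d = reaching_field(obj_pos, rng)  # one shared sampled condition...
    hand = cgrasp(obj_pos, d)         # ...drives the guiding hand grasp
    body = creach(obj_pos, d)         # ...and the reaching body
    return hand, body                 # so they are mutually compatible

hand, body = cwgrasp(np.array([0.5, 0.0, 1.0]))
```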
This is inspired by FLEX [Tendulkar et al.], which:
👉 generates a guiding hand-only grasp,
👉 generates many random bodies,
👉 post-processes the guiding hand to match the body, & the body to match the guiding hand.
🧵 3/10
👉 the body needs to plausibly reach the object,
👉 fingers need to dexterously grasp the object,
👉 hand pose and object pose need to look compatible with each other, and
👉 training datasets for 3D whole-body grasps are really scarce.
🧵 2/10