Long Le
@vlongle.bsky.social
PhD student at the University of Pennsylvania. Working on robot learning. https://vlongle.github.io/
🧩 How does it work?

Articulate-Anything breaks the problem into three steps: (1) mesh retrieval, which finds a candidate mesh for each part; (2) link placement, which spatially arranges the parts; and (3) joint prediction, which determines the kinematic movement between parts. Take a look at a video explaining this pipeline!
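The three-step pipeline can be sketched roughly as follows. This is a minimal illustrative sketch only; all function names, data structures, and defaults here are hypothetical stand-ins, not the project's actual API.

```python
from dataclasses import dataclass

# Hypothetical data structures for an articulated-object description.
@dataclass
class Link:
    name: str
    mesh: str                        # ID/path of the retrieved mesh asset
    pose: tuple = (0.0, 0.0, 0.0)    # placement in the assembly frame

@dataclass
class Joint:
    parent: str
    child: str
    joint_type: str                  # e.g. "revolute" or "prismatic"

def retrieve_meshes(object_parts):
    # (1) Mesh retrieval: map each detected part to a candidate mesh asset.
    return {part: f"{part}.obj" for part in object_parts}

def place_links(meshes):
    # (2) Link placement: spatially arrange the retrieved parts
    # (here, a toy placement along one axis).
    return [Link(name=p, mesh=m, pose=(float(i), 0.0, 0.0))
            for i, (p, m) in enumerate(meshes.items())]

def predict_joints(links):
    # (3) Joint prediction: connect links with a kinematic joint
    # (here, a toy rule joining consecutive links with a revolute joint).
    return [Joint(parent=links[i].name, child=links[i + 1].name,
                  joint_type="revolute")
            for i in range(len(links) - 1)]

def articulate(object_parts):
    meshes = retrieve_meshes(object_parts)
    links = place_links(meshes)
    joints = predict_joints(links)
    return links, joints

links, joints = articulate(["cabinet_body", "door"])
print([link.name for link in links])   # part links in placement order
print(joints[0].joint_type)            # predicted joint between the parts
```

In the real system, each stage would be driven by a VLM rather than the fixed rules shown here; the sketch only illustrates how the stages compose.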
December 10, 2024 at 4:44 PM
📦 Can frontier AI transform ANY physical object from ANY input modality into a high-quality digital twin that also MOVES?

Excited to share our work, Articulate-Anything 🐵, exploring how VLMs can bridge the gap between the physical and digital worlds.

Website: articulate-anything.github.io
December 10, 2024 at 4:44 PM