Long Le
@vlongle.bsky.social
PhD student at University of Pennsylvania . Working on robot learning. https://vlongle.github.io/
Cool! I love your blenderproc code!
December 10, 2024 at 6:50 PM
🚀 How well does it work?

Articulate-Anything is much better than the baselines, both quantitatively and qualitatively. This is possible by (1) leveraging richer input modalities, (2) modeling articulation as high-level program synthesis, and (3) using a closed-loop actor-critic system.
December 10, 2024 at 4:44 PM
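The closed-loop actor-critic idea from point (3) can be sketched roughly like this. This is a toy illustration, not the actual Articulate-Anything API: the `actor` and `critic` functions here are trivial stand-ins for what are VLM calls in the real system, and all names are hypothetical.

```python
# Toy sketch of a closed-loop actor-critic refinement loop: the actor
# proposes a candidate articulation, the critic scores it and returns
# feedback, and the loop repeats until the critic is satisfied.

THRESHOLD = 0.9  # hypothetical acceptance score

def actor(observation, feedback):
    # Stand-in actor: propose a candidate, improving on critic feedback.
    quality = 0.5 if feedback is None else feedback["last_score"] + 0.25
    return {"joint_type": "revolute", "quality": quality}

def critic(observation, candidate):
    # Stand-in critic: score the candidate and explain the score.
    score = min(candidate["quality"], 1.0)
    return score, {"last_score": score}

def articulate(observation, max_iters=5):
    candidate, feedback, score = None, None, 0.0
    for _ in range(max_iters):
        candidate = actor(observation, feedback)
        score, feedback = critic(observation, candidate)
        if score >= THRESHOLD:
            break
    return candidate, score

candidate, score = articulate("video_of_a_laptop")
```

The point of the loop is that a single forward pass from the actor is often wrong; feeding the critic's verdict back in lets the system self-correct.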
🧩How does it work?

Articulate-Anything breaks the problem into three steps: (1) Mesh retrieval, which finds a mesh for each part, (2) Link placement, which spatially arranges the parts together, and (3) Joint prediction, which determines the kinematic movement between parts. Take a look at a video explaining this pipeline!
December 10, 2024 at 4:44 PM
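In code, the three stages above chain together roughly like this. A minimal sketch with hypothetical function names and dummy return values; the real system implements each stage with VLM-driven program synthesis:

```python
# Sketch of the three-stage pipeline: mesh retrieval -> link placement
# -> joint prediction. All names and return values are illustrative.

def mesh_retrieval(observation):
    # 1) Retrieve a candidate mesh for each object part.
    return ["base", "lid"]

def link_placement(parts):
    # 2) Spatially arrange the part meshes into links of a kinematic tree.
    return {part: {"xyz": (0.0, 0.0, 0.0)} for part in parts}

def joint_prediction(links):
    # 3) Predict the joint (type, axis, limits) connecting pairs of links.
    return [{"parent": "base", "child": "lid", "type": "revolute"}]

def pipeline(observation):
    parts = mesh_retrieval(observation)
    links = link_placement(parts)
    joints = joint_prediction(links)
    return links, joints
```

Decomposing the task this way lets each stage be evaluated and corrected independently, rather than asking one model to emit a full articulated asset in one shot.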
🙀Why does this matter?

Creating interactable 3D models of the world is hard. An artist has to model the physical appearance of the object to create a mesh. Then a roboticist has to manually annotate the kinematic joints in URDF to give the object movement.
But what if we could automate all these steps?
December 10, 2024 at 4:44 PM
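For a sense of what that manual URDF annotation looks like, here is a minimal hand-written fragment for something like a laptop hinge (an illustrative example, not taken from the paper):

```xml
<robot name="laptop">
  <link name="base"/>
  <link name="lid"/>
  <!-- Revolute hinge between base and lid, rotating about the y-axis. -->
  <joint name="hinge" type="revolute">
    <parent link="base"/>
    <child link="lid"/>
    <axis xyz="0 1 0"/>
    <limit lower="0" upper="2.3" effort="1" velocity="1"/>
  </joint>
</robot>
```

Every joint type, axis, and limit in a file like this is normally specified by hand, which is exactly the step being automated.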
Hello, can I be added please 🙏? I work on robot/reinforcement learning and we have a cool sim2real RL paper in submission!
November 14, 2024 at 3:55 PM
Hello! Can I be added please 🙏? I work on 3D vision for robot learning
November 14, 2024 at 3:12 PM