Lerrel Pinto
lerrelpinto.com
@lerrelpinto.com
Assistant Professor of CS @nyuniversity.

I like robots!
This project, which combines hardware design with learning-based controllers, was a monumental effort led by @anyazorin.bsky.social and Irmak Guzey. More links and information about RUKA are below:

Website: ruka-hand.github.io
Assembly Instructions: ruka.gitbook.io/instructions
April 18, 2025 at 6:53 PM
This would be funny! 😂
March 29, 2025 at 7:23 PM
This project was an almost entirely solo effort by @haldarsiddhant.bsky.social. And, as always, it is fully open-sourced.

Project page: point-policy.github.io
Paper: arxiv.org/abs/2502.20391
Point Policy: Unifying Observations and Actions with Key Points for Robot Manipulation
February 28, 2025 at 7:09 PM
The overall algorithm is simple (a toy sketch follows the list):
1. Extract key points from human videos.
2. Train a transformer policy to predict future robot key points.
3. Convert predicted key points to robot actions.
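Very roughly, the three steps could look like the toy sketch below. Every name, shape, and stub here is an illustrative assumption rather than the actual Point Policy code; see the project page and paper linked above for the real implementation.

```python
# Toy sketch of the three steps above; names, shapes, and the stub logic are
# illustrative assumptions, not the actual Point Policy implementation.
import numpy as np


def extract_key_points(video, tracker):
    # Step 1: run a point tracker over a human video to get an array of
    # shape (num_frames, num_points, 3) of semantic key points.
    return np.stack([tracker(frame) for frame in video])


class KeyPointPolicy:
    # Step 2: stand-in for the transformer that maps a history of key
    # points to the robot key points expected at the next timestep.
    def predict(self, key_point_history):
        # A trained model would attend over the whole history; this stub
        # just repeats the latest frame so the example runs end to end.
        return key_point_history[-1]


def key_points_to_action(current_kps, predicted_kps):
    # Step 3: convert predicted key points into a robot command, here a
    # simple Cartesian delta for the end-effector point (index 0).
    return predicted_kps[0] - current_kps[0]


# Tiny end-to-end run with random data standing in for a demonstration.
rng = np.random.default_rng(0)
video = rng.random((10, 64, 64, 3))               # fake RGB frames
tracker = lambda frame: rng.random((5, 3))        # fake 5-point tracker
key_points = extract_key_points(video, tracker)   # (10, 5, 3)
action = key_points_to_action(key_points[-1], KeyPointPolicy().predict(key_points))
print(action.shape)                               # (3,)
```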
February 28, 2025 at 7:09 PM
Point Policy uses sparse key points to represent both human demonstrators and robots, bridging the morphology gap. The scene is hence encoded through semantically meaningful key points from minimal human annotations.
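As a loose illustration of what a shared sparse key-point representation could mean, the snippet below encodes a human demonstration frame and a robot frame in the same fixed-order format. The point names are invented for the example and are not the set used in the paper.

```python
# Hypothetical shared key-point layout; the names are made up for this
# example, not taken from Point Policy.
import numpy as np

KEY_POINT_NAMES = ["wrist", "thumb_tip", "index_tip", "object_corner"]


def encode_frame(named_points):
    # Stack named 3D points in a fixed order so that human frames (tracked
    # from video) and robot frames (from kinematics) share one format.
    return np.stack([np.asarray(named_points[name]) for name in KEY_POINT_NAMES])


human_frame = encode_frame({
    "wrist": [0.40, 0.10, 0.30], "thumb_tip": [0.45, 0.08, 0.28],
    "index_tip": [0.46, 0.12, 0.29], "object_corner": [0.55, 0.15, 0.02],
})
robot_frame = encode_frame({
    "wrist": [0.41, 0.11, 0.31], "thumb_tip": [0.44, 0.09, 0.27],
    "index_tip": [0.47, 0.11, 0.30], "object_corner": [0.55, 0.15, 0.02],
})
assert human_frame.shape == robot_frame.shape == (4, 3)
```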
February 28, 2025 at 7:09 PM
It should be accessible in the EU now!
February 26, 2025 at 4:46 PM
AnySense is built to empower researchers with better tools for robotics. Try it out below.

Download on App store: apps.apple.com/us/app/anyse...
Open-source code on GitHub: github.com/NYU-robot-le...
Website: anysense.app

AnySense is led by @raunaqb.bsky.social with several collaborators from NYU.
AnySense is an open-source iPhone app that enables multi-sensory data collection by integrating the iPhone’s sensory suite with external sensors via Bluetooth and wired interfaces, enabling both offl...
February 26, 2025 at 3:14 PM
Data collected in the wild with AnySense can then be used to train multimodal policies! In the video above, we use the Robot Utility Models framework to train visuo-tactile policies for a whiteboard-erasing task. You can use it for so much more, though!
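As a loose sketch of what training a visuo-tactile policy on such data could look like, here is a minimal behavior-cloning loop. The tensor shapes, field layout, and network are assumptions for illustration; the real AnySense export format and the Robot Utility Models training code will differ.

```python
# Rough behavior-cloning sketch for a visuo-tactile policy on phone-collected
# data. Shapes and the parsing step are assumed for illustration only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Pretend we already parsed a recording into aligned tensors:
# camera frames, tactile readings, and the actions executed at each step.
num_steps = 256
images = torch.rand(num_steps, 3, 96, 96)    # RGB frames
tactile = torch.rand(num_steps, 16)          # e.g. 16 taxel readings
actions = torch.rand(num_steps, 7)           # e.g. 6-DoF delta + gripper


class VisuoTactilePolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 + 16, 128), nn.ReLU(), nn.Linear(128, 7),
        )

    def forward(self, img, touch):
        # Fuse image and tactile features, then regress an action.
        return self.head(torch.cat([self.vision(img), touch], dim=-1))


policy = VisuoTactilePolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-4)
loader = DataLoader(TensorDataset(images, tactile, actions), batch_size=32, shuffle=True)

for epoch in range(5):
    for img, touch, act in loader:
        loss = nn.functional.mse_loss(policy(img, touch), act)
        opt.zero_grad()
        loss.backward()
        opt.step()
```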
February 26, 2025 at 3:14 PM
Thanks, Tucker! The timing of this is great given the uncertainty around other funding mechanisms.
February 18, 2025 at 6:00 PM