Peiqi Liu
@peiqiliu.bsky.social
NYU '24
Research Assistant @ NYU
Robotics Engineer @ Hello Robot Inc
https://peiqi-liu.github.io
Reposted by Peiqi Liu
We just released RUKA, a $1300 humanoid hand that is 3D-printable, strong, precise, and fully open-sourced!

The key technical breakthrough is that we can control the joints and fingertips of the robot **without joint encoders**. All we need is self-supervised data collection and learning.
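For intuition, here is a minimal sketch of the self-supervised recipe (hypothetical dimensions and names, not the RUKA codebase): log (motor command, fingertip position) pairs by babbling the tendons, then regress the inverse map so the hand can be commanded in fingertip space with no joint encoders.

```python
import torch
import torch.nn as nn

# Hypothetical shapes: 11 tendon motors, 5 fingertips x 3D positions.
N_MOTORS, N_TIP_DIMS = 11, 15

# Self-supervised data: random motor babbling, logged as
# (motor_command, observed_fingertip_positions) pairs.
motor_cmds = torch.rand(10_000, N_MOTORS)       # stand-in for logged commands
tip_positions = torch.rand(10_000, N_TIP_DIMS)  # stand-in for vision/mocap labels

# Inverse model: desired fingertip positions -> motor commands.
inverse_model = nn.Sequential(
    nn.Linear(N_TIP_DIMS, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_MOTORS),
)
opt = torch.optim.Adam(inverse_model.parameters(), lr=1e-3)

for epoch in range(10):
    pred = inverse_model(tip_positions)
    loss = nn.functional.mse_loss(pred, motor_cmds)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At control time: command fingertips directly, no joint encoders needed.
target_tips = torch.rand(1, N_TIP_DIMS)
motor_cmd = inverse_model(target_tips)
```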
April 18, 2025 at 6:53 PM
Reposted by Peiqi Liu
When life gives you lemons, you pick them up.

(trained with robotutilitymodels.com)
March 28, 2025 at 4:02 AM
Reposted by Peiqi Liu
The robot behaviors shown below are trained without any teleop, sim2real, genai, or motion planning. Simply show the robot a few examples of doing the task yourself, and our new method, called Point Policy, spits out a robot-compatible policy!
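As a toy illustration of the no-teleop idea (the actual Point Policy pipeline is on the project page; this helper and its offset are made up): track points on the human hand in the demo video, then reuse the trajectory as robot end-effector waypoints.

```python
import numpy as np

# Toy retargeting step (illustrative only; Point Policy's real pipeline
# tracks semantic points in human videos and trains a policy on them).
def human_points_to_robot_waypoints(hand_points_3d: np.ndarray) -> np.ndarray:
    """hand_points_3d: (T, 3) tracked human fingertip positions over a demo.
    Treat the fingertip trajectory as end-effector waypoints for the robot,
    so no teleoperation is needed to produce robot-compatible training data."""
    offset = np.array([0.0, 0.0, 0.02])  # hypothetical hand-to-gripper offset
    return hand_points_3d + offset

waypoints = human_points_to_robot_waypoints(np.random.rand(50, 3))
```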
February 28, 2025 at 7:09 PM
Reposted by Peiqi Liu
We just released AnySense, an iPhone app for effortless data acquisition and streaming for robotics. We leverage Apple’s development frameworks to record and stream:

1. RGBD + Pose data
2. Audio from the mic or custom contact microphones
3. Seamless Bluetooth integration for external sensors
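To give a feel for what a consumer of such recordings might look like, here is a sketch assuming a simple hypothetical on-disk layout (not AnySense's actual file format):

```python
import json
import numpy as np

# Hypothetical episode layout (not AnySense's real format): one .npz per
# frame plus a JSON index of camera poses.
def load_episode(index_path: str):
    with open(index_path) as f:
        index = json.load(f)  # [{"frame": "0001.npz", "pose": [...16 floats]}, ...]
    for entry in index:
        data = np.load(entry["frame"])
        rgb = data["rgb"]      # (H, W, 3) uint8 color image
        depth = data["depth"]  # (H, W) float32 depth in meters
        pose = np.array(entry["pose"]).reshape(4, 4)  # camera-to-world transform
        yield rgb, depth, pose
```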
February 26, 2025 at 3:14 PM
Reposted by Peiqi Liu
What's new in the Stretch community this month?

❄️ Gazebo Harmonic
❄️ Dynamic semantic maps for open-vocabulary tasks
❄️ Natural-language narration of robot experiences
❄️ Implicit human-robot communication

And more! Follow the link below for more details:

hello-robot.com/community-up...
February 6, 2025 at 12:15 AM
DynaMem is now fully refactored and integrated into the Stretch AI repo!
Try it out: github.com/hello-robot/...
Project page: dynamem.github.io
DynaMem: Online Dynamic Spatio-Semantic Memory for Open World Mobile Manipulation
DynaMem is an OVMM (open-vocabulary mobile manipulation) system that adapts a spatio-semantic memory to dynamically changing environments.
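The core idea in toy form (illustrative only; the real system is in the paper and repo): a point-based memory with semantic features that forgets points when a region is re-observed, so the map keeps up with a changing world.

```python
import numpy as np

class SpatioSemanticMemory:
    """Toy dynamic memory: 3D points with semantic features and timestamps.
    (Illustrative sketch, not the DynaMem implementation.)"""

    def __init__(self):
        self.xyz = np.zeros((0, 3))     # point positions
        self.feat = np.zeros((0, 512))  # e.g. CLIP-style features, L2-normalized
        self.t = np.zeros((0,))         # last time each point was observed

    def update(self, xyz, feat, now, radius=0.1):
        # Forget old points that fall inside the newly observed region,
        # then add the fresh observations.
        if len(self.xyz):
            d = np.linalg.norm(self.xyz[:, None] - xyz[None], axis=-1).min(axis=1)
            keep = d > radius
            self.xyz, self.feat, self.t = self.xyz[keep], self.feat[keep], self.t[keep]
        self.xyz = np.vstack([self.xyz, xyz])
        self.feat = np.vstack([self.feat, feat])
        self.t = np.concatenate([self.t, np.full(len(xyz), now)])

    def query(self, text_feat):
        # Return the 3D location whose feature best matches a text query.
        sims = self.feat @ text_feat
        return self.xyz[np.argmax(sims)]
```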
December 31, 2024 at 10:12 PM
Reposted by Peiqi Liu
We all want a home robot that can actually help us out. Why can't I ask my robot "where did I leave my water bottle" and get a good answer?

In Graph-EQA, we build a 3D memory as the robot explores and use that memory to make decisions.

saumyasaxena.github.io/grapheqa/
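A toy version of the memory-then-answer loop (Graph-EQA builds its scene graph from 3D perception during exploration; this structure is made up):

```python
# Toy scene-graph memory: object nodes with positions, relation edges.
scene_graph = {
    "nodes": {
        "water_bottle_1": {"label": "water bottle", "xyz": [2.1, 0.4, 0.9]},
        "table_1": {"label": "table", "xyz": [2.0, 0.5, 0.0]},
    },
    "edges": [("water_bottle_1", "on", "table_1")],
}

def answer_where(graph, query_label):
    # Look up the queried object and describe it via its relations.
    for node_id, node in graph["nodes"].items():
        if node["label"] == query_label:
            rels = [f"{r} the {graph['nodes'][o]['label']}"
                    for s, r, o in graph["edges"] if s == node_id]
            where = " and ".join(rels) or "in the map"
            return f"The {query_label} is {where}, at {node['xyz']}."
    return f"I haven't seen a {query_label} yet."

print(answer_where(scene_graph, "water bottle"))
# -> The water bottle is on the table, at [2.1, 0.4, 0.9].
```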
December 30, 2024 at 4:20 PM
Reposted by Peiqi Liu
A look at the future: chatting with my robot via Discord to ask it to find something in my house.

This uses an LLM to understand what the human wants and generate a task plan, then builds an open-vocabulary 3D scene representation to find and pick up objects.
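The LLM step can be sketched like this (hypothetical prompt and skill set, not the actual Stretch AI implementation):

```python
# Hypothetical planning step: ask an LLM to map a chat request onto the
# robot's primitive skills, one call per line.
PROMPT = """You control a mobile robot with skills:
  explore(), goto(object), pick(object), place(location).
Translate the user's request into one skill call per line.
Request: {request}"""

def plan(request: str, llm) -> list[str]:
    # `llm` is any callable mapping a prompt string to a completion string.
    completion = llm(PROMPT.format(request=request))
    return [line.strip() for line in completion.splitlines() if line.strip()]

# e.g. plan("find my water bottle", llm) might return:
#   ["explore()", "goto(water bottle)", "pick(water bottle)"]
```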
December 31, 2024 at 4:38 PM
Reposted by Peiqi Liu
I'd like to introduce what I've been working on at @hellorobot.bsky.social: Stretch AI, a set of open-source tools for language-guided autonomy, exploration, navigation, and learning from demonstration.

Check it out: github.com/hello-robot/...

Thread ->
December 3, 2024 at 4:51 PM
Reposted by Peiqi Liu
New paper! We show that by using keypoint-based image representation, robot policies become robust to different object types and background changes.

We call this method Prescriptive Point Priors for robot Policies, or P3-PO for short. The full project is here: point-priors.github.io
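A minimal sketch of why keypoints buy robustness (illustrative; P3-PO uses human-annotated prescriptive points plus learned point trackers):

```python
import numpy as np

# Toy keypoint observation: the policy sees only tracked point coordinates,
# so background and appearance changes that don't move the points have no
# effect on its input.
def keypoint_observation(frame_points: np.ndarray) -> np.ndarray:
    """frame_points: (K, 2) pixel locations of K tracked semantic points.
    Returns a flat (2K,) vector to feed the policy in place of raw pixels."""
    return frame_points.reshape(-1).astype(np.float32)

obs = keypoint_observation(np.array([[120.0, 88.0], [210.5, 143.2]]))
# `obs` would be consumed by a behavior-cloned policy instead of an image.
```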
December 10, 2024 at 8:32 PM
Reposted by Peiqi Liu
Modern policy architectures are unnecessarily complex. In our #NeurIPS2024 project called BAKU, we focus on what really matters for good policy learning.

BAKU is modular, language-conditioned, compatible with multiple sensor streams & action multi-modality, and importantly fully open-source!
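In spirit, the architecture looks something like this minimal sketch (dimensions and encoders are placeholders, not the paper's):

```python
import torch
import torch.nn as nn

D = 256  # shared token width (placeholder)

class ModularPolicy(nn.Module):
    """Toy modular, language-conditioned policy: one encoder per input
    stream, a shared transformer trunk, and an action head."""

    def __init__(self, act_dim=7):
        super().__init__()
        self.img_enc = nn.Linear(512, D)   # stand-in for a vision backbone
        self.prop_enc = nn.Linear(14, D)   # proprioception encoder
        self.lang_enc = nn.Linear(384, D)  # stand-in for a text embedding
        layer = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(D, act_dim)

    def forward(self, img_feat, prop, lang_feat):
        # One token per modality; new sensor streams just add tokens.
        tokens = torch.stack([
            self.img_enc(img_feat), self.prop_enc(prop), self.lang_enc(lang_feat)
        ], dim=1)
        fused = self.trunk(tokens)
        return self.action_head(fused[:, -1])  # predict action from last token

policy = ModularPolicy()
act = policy(torch.rand(1, 512), torch.rand(1, 14), torch.rand(1, 384))
```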
December 9, 2024 at 11:33 PM
Reposted by Peiqi Liu
Since we are nearing the end of the year, I'll revisit some of the work I'm most excited about from the last year, and maybe give a sneak peek of what we are up to next.

To start off: Robot Utility Models, which enables zero-shot deployment. In the video below, the robot hasn't seen these doors before.
December 8, 2024 at 2:32 AM