I like robots!
The key technical breakthrough here is that we can control joints and fingertips of the robot **without joint encoders**. All we need here is self-supervised data collection and learning.
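To make "no joint encoders" concrete, below is a minimal sketch of one plausible recipe: during self-supervised collection, the hand executes random motor commands while an external tracker logs the resulting fingertip poses, and an inverse model is then trained to map desired poses back to commands. Everything here (names, dimensions, the tracking setup) is an illustrative assumption, not the actual pipeline.

```python
# Minimal sketch, assuming encoder-free control is learned as an inverse
# model: random motor commands are executed while an external tracker logs
# the resulting fingertip poses, then a network learns to invert that map.
# All names, dimensions, and the tracking setup are illustrative assumptions.
import torch
import torch.nn as nn

class InverseModel(nn.Module):
    """Maps a desired fingertip pose vector to a motor command."""
    def __init__(self, pose_dim=15, command_dim=11, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, command_dim),
        )

    def forward(self, desired_pose):
        return self.net(desired_pose)

def train(model, loader, epochs=10):
    """loader yields (command, pose) pairs logged during motor babbling."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for command, pose in loader:
            loss = nn.functional.mse_loss(model(pose), command)
            opt.zero_grad()
            loss.backward()
            opt.step()
```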
1. RGBD + Pose data
2. Audio from the mic or custom contact microphones
3. Seamless Bluetooth integration for external sensors
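For concreteness, here is a hypothetical sketch of what one synchronized capture frame combining these streams might look like; field names and shapes are assumptions, not the released data format.

```python
# Hypothetical sketch of one synchronized capture frame combining the
# streams listed above; all field names and shapes are assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class CaptureFrame:
    timestamp: float                     # seconds since recording start
    rgb: np.ndarray                      # (H, W, 3) uint8 color image
    depth: np.ndarray                    # (H, W) float32 depth in meters
    pose: np.ndarray                     # (4, 4) camera/end-effector pose
    audio: np.ndarray                    # (n_samples,) mic or contact-mic chunk
    ble_sensors: dict[str, np.ndarray]   # readings from paired BLE sensors
```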
We call this method Prescriptive Point Priors for robot Policies, or P3-PO for short. Full project here: point-priors.github.io
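To make the idea concrete, here is a hedged sketch of a point-prior control loop: a human annotates points once, a tracker carries them into each new frame, and a small network maps the tracked points to an action. The tracker API and all names are illustrative assumptions, not the released P3-PO code.

```python
# Hedged sketch of a point-prior policy: the policy reads tracked 2D
# keypoints instead of raw pixels. The tracker call and all names are
# illustrative assumptions.
import torch
import torch.nn as nn

class PointPolicy(nn.Module):
    """Predicts an action from N tracked annotation points."""
    def __init__(self, n_points=8, action_dim=7, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_points * 2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, points):           # points: (B, N, 2) pixel coordinates
        return self.net(points.flatten(1))

def control_step(policy, tracker, frame, annotated_points):
    # Carry the human-annotated points into the current frame, then act.
    points = tracker.track(frame, annotated_points)   # assumed tracker API
    with torch.no_grad():
        return policy(points.unsqueeze(0)).squeeze(0)
```

One appeal of this formulation is that the policy's input space stays tiny and object-centric, which is consistent with conditioning on annotated points rather than full images.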
BAKU is modular, language-conditioned, compatible with multiple sensor streams & action multi-modality, and, importantly, fully open-source!
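As a rough illustration of the modular design, the sketch below wires one encoder per sensor stream into a shared transformer trunk with a swappable action head. Module names and sizes are assumptions, not the released BAKU code.

```python
# Hedged sketch of a BAKU-style modular policy: one encoder per sensor
# stream, tokens fused by a transformer trunk, and a swappable action head.
# Names and dimensions are illustrative, not the released implementation.
import torch
import torch.nn as nn

class ModularPolicy(nn.Module):
    def __init__(self, encoders: nn.ModuleDict, action_head: nn.Module, d=256):
        super().__init__()
        self.encoders = encoders         # e.g. {"rgb": ..., "lang": ...}
        self.trunk = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
            num_layers=4,
        )
        self.action_head = action_head   # e.g. an MLP or generative head

    def forward(self, obs: dict):
        # Encode each available stream into one token and fuse them.
        tokens = torch.stack(
            [self.encoders[k](v) for k, v in obs.items()], dim=1)
        fused = self.trunk(tokens).mean(dim=1)
        return self.action_head(fused)
```

Keeping encoders and the action head as plug-in modules is what lets one trunk serve different sensor suites and action parameterizations.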
To start off: Robot Utility Models, which enable zero-shot deployment. In the video below, the robot hasn't seen these doors before.
Project was led by Irmak Guzey w/ Yinlong Dai, Georgy Savva and Raunaq Bhirangi.
More details: object-rewards.github.io
We just released ViSk, where skin sensing is used to train fine-grained policies with ~1 hour of data. I have attached a single-take video on this post.
visuoskin.github.io
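As a back-of-the-envelope illustration, a visuo-skin policy can be as simple as projecting image features and raw skin readings into a shared space before predicting actions. The sketch below does exactly that, with all dimensions and names assumed for illustration rather than taken from the ViSk release.

```python
# Hedged sketch of a visuo-tactile policy in the spirit of ViSk: camera
# features and raw skin-sensor readings are embedded and fused before
# action prediction. Dimensions and names are assumptions.
import torch
import torch.nn as nn

class VisuoSkinPolicy(nn.Module):
    def __init__(self, img_feat_dim=512, skin_dim=15, action_dim=7, d=256):
        super().__init__()
        self.img_proj = nn.Linear(img_feat_dim, d)   # from a vision encoder
        self.skin_proj = nn.Linear(skin_dim, d)      # raw skin channels
        self.head = nn.Sequential(
            nn.Linear(2 * d, d), nn.ReLU(),
            nn.Linear(d, action_dim),
        )

    def forward(self, img_feat, skin):
        z = torch.cat([self.img_proj(img_feat), self.skin_proj(skin)], dim=-1)
        return self.head(z)
```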