The key technical breakthrough here is that we can control the robot's joints and fingertips **without joint encoders**. All it takes is self-supervised data collection and learning.
1. RGBD + Pose data
2. Audio from the mic or custom contact microphones
3. Seamless Bluetooth integration for external sensors
❄️ Gazebo Harmonic
❄️ Dynamic semantic maps for open-vocabulary tasks
❄️ Natural-language narration of robot experiences
❄️ Implicit human-robot communication
And more! Follow the link below for more details:
hello-robot.com/community-up...
Try it out: github.com/hello-robot/...
Project page: dynamem.github.io
In Graph-EQA, we build a 3D memory as the robot explores, using that memory to make decisions.
saumyasaxena.github.io/grapheqa/
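The explore-then-decide loop above can be sketched as a tiny semantic memory: detections are written in as the robot moves, then queried later to pick a target without re-exploring. This is an illustrative toy (all class and method names are hypothetical), not the actual Graph-EQA implementation, which uses a richer 3D scene graph.

```python
import math

class SceneMemory:
    """Toy 3D semantic memory: stores labeled detections with positions
    as the robot explores, then answers queries against them.
    (Illustrative sketch only; not the actual Graph-EQA code.)"""

    def __init__(self):
        self.nodes = []  # each node: (label, (x, y, z))

    def add_observation(self, label, position):
        self.nodes.append((label, position))

    def locate(self, label):
        """Return all remembered positions of objects with this label."""
        return [pos for lbl, pos in self.nodes if lbl == label]

    def nearest(self, label, robot_pos):
        """Pick the closest remembered instance of `label` to the robot."""
        candidates = self.locate(label)
        if not candidates:
            return None
        return min(candidates, key=lambda p: math.dist(p, robot_pos))

# During exploration, detections are written into memory...
memory = SceneMemory()
memory.add_observation("mug", (1.0, 0.5, 0.8))
memory.add_observation("mug", (4.0, 2.0, 0.9))
memory.add_observation("sofa", (3.0, 1.0, 0.4))

# ...and later queried to make decisions without re-exploring.
print(memory.nearest("mug", (0.0, 0.0, 0.0)))  # -> (1.0, 0.5, 0.8)
```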
This uses an LLM to understand what the human wants and generate a task plan, then builds an open-vocabulary 3D scene representation to find and pick up objects.
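The "find objects" step of an open-vocabulary scene representation can be sketched as matching a text-query embedding against per-point vision-language features (CLIP-style) and returning the best-scoring 3D point. The toy 2-D "embeddings" here are stand-ins, and `locate_open_vocab` is a hypothetical name — a real system would use actual encoder features over a dense point cloud.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def locate_open_vocab(query_embedding, point_features):
    """Given a text-query embedding and per-point vision-language features,
    return the 3D point whose feature best matches, plus its score.
    (Hypothetical stand-in for an open-vocabulary scene representation.)"""
    best_point, best_score = None, -1.0
    for point, feat in point_features.items():
        score = cosine_sim(query_embedding, feat)
        if score > best_score:
            best_point, best_score = point, score
    return best_point, best_score

# Toy 2-D "embeddings" stand in for real CLIP-style features.
point_features = {
    (1.0, 0.5, 0.8): np.array([0.9, 0.1]),  # mug-like feature
    (3.0, 1.0, 0.4): np.array([0.1, 0.9]),  # sofa-like feature
}
query = np.array([1.0, 0.0])  # stand-in embedding of the text "mug"
point, score = locate_open_vocab(query, point_features)
```

The LLM-generated task plan would then pass each target phrase ("mug", "sink", ...) through this lookup to get a navigation and grasp target.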
Check it out: github.com/hello-robot/...
Thread ->
We call this method Prescriptive Point Priors for robot Policies, or P3-PO for short. The full project is here: point-priors.github.io
BAKU is modular, language-conditioned, compatible with multiple sensor streams and multi-modal actions, and, importantly, fully open-source!
To start off: Robot Utility Models, which enable zero-shot deployment. In the video below, the robot hasn't seen these doors before.