I like robots!
The key technical breakthrough here is that we can control the robot's joints and fingertips **without joint encoders**. All we need is self-supervised data collection and learning.
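The post doesn't spell out the method, but here's a minimal sketch of the idea, assuming supervision comes from the robot's own self-collected exploration data (e.g., a camera-based tracker) rather than encoders. All names and the model itself are illustrative, not the authors' code:

```python
# Illustrative sketch only (not the authors' code): regress fingertip
# positions straight from camera images, with labels coming from the
# robot's own self-collected data instead of joint encoders.
import torch
import torch.nn as nn

class VisualStateEstimator(nn.Module):
    """Maps an RGB image to fingertip XYZ positions."""
    def __init__(self, n_fingertips: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, n_fingertips * 3)

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(rgb))

model = VisualStateEstimator()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Dummy batch standing in for self-collected (image, tracked-position) pairs.
rgb = torch.randn(8, 3, 128, 128)
tracked_xyz = torch.randn(8, 12)

loss = nn.functional.mse_loss(model(rgb), tracked_xyz)
opt.zero_grad(); loss.backward(); opt.step()
```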
In the latest episode, I chatted to Prof. Lerrel Pinto (@lerrelpinto.com) from New York University about #robot learning and decision making.
Available wherever you get your podcasts: linktr.ee/robottalkpod
Later this season, I'll be chatting to Prof. Lerrel Pinto (@lerrelpinto.com) from NYU about using machine learning to train robots to adapt to new environments.
Send me your questions for Lerrel: robottalk.org/ask-a-question/
1. RGBD + Pose data
2. Audio from the mic or custom contact microphones
3. Seamless Bluetooth integration for external sensors
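For a sense of what one synchronized record from such a capture app might contain, here's a rough sketch; field names and types are my own guesses, not the app's actual schema:

```python
# Rough sketch of one synchronized capture record; fields are illustrative
# guesses, not the app's actual schema. Requires Python 3.9+.
from dataclasses import dataclass, field

@dataclass
class CaptureFrame:
    timestamp_ns: int                 # shared clock for all streams
    rgb: bytes                        # encoded color image
    depth: bytes                      # 16-bit depth map
    pose: tuple[float, ...]           # device pose: xyz + quaternion
    audio_chunk: bytes = b""          # PCM from mic or contact microphone
    ble_readings: dict[str, float] = field(default_factory=dict)  # external sensors
```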
We show that order matters in code gen: casting code synthesis as a sequential edit problem by preprocessing examples in SFT data improves LM test-time scaling laws.
And it works! Higher performance on HumanEval, MBPP, and CodeContests across small LMs like Gemma-2, Phi-3, and Llama 3.1.
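To make the "sequential edit" framing concrete, here's one hedged way such SFT preprocessing could look; this is my own illustration of the idea, not the paper's exact recipe:

```python
# Hedged illustration of recasting code synthesis as sequential edits for
# SFT data (my own sketch of the idea, not the paper's preprocessing).
import random

def to_edit_trajectory(program: list[str], rng: random.Random):
    """Replay a finished program as ordered line insertions.

    Each step pairs the current buffer with the next edit (position, line),
    so the model learns to edit code rather than emit it left-to-right.
    """
    order = list(range(len(program)))
    rng.shuffle(order)                           # edits need not be top-to-bottom
    buffer, placed, steps = [], [], []
    for idx in order:
        pos = sum(1 for j in placed if j < idx)  # keep original relative order
        steps.append(("\n".join(buffer), (pos, program[idx])))
        buffer.insert(pos, program[idx])
        placed.append(idx)
    return steps

prog = ["def double(x):", "    y = x * 2", "    return y"]
for state, (pos, line) in to_edit_trajectory(prog, random.Random(0)):
    print(f"insert {line!r} at {pos} given {state!r}")
```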
Introducing DINO-WM: World Models on Pre-trained Visual Features for Zero-shot Planning.
We believe the true potential of world models lies in enabling agents to reason at test time.
Check out the thread from @gaoyuezhou.bsky.social for more details.
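Here's a hedged sketch of what planning with a world model on frozen pre-trained features can look like in practice; the names, shapes, and the simple cross-entropy-method planner are my own illustration, not the released DINO-WM code:

```python
# Hedged sketch: plan in the latent space of a frozen pre-trained encoder
# with a learned dynamics model, scoring action sequences by latent distance
# to a goal embedding. Illustrative only, not the released DINO-WM code.
import torch

def plan(encoder, dynamics, obs, goal_obs,
         horizon=10, samples=256, iters=3, action_dim=2):
    """Return a first action chosen by a simple cross-entropy-method planner."""
    z0, z_goal = encoder(obs), encoder(goal_obs)   # frozen features
    mean = torch.zeros(horizon, action_dim)
    std = torch.ones(horizon, action_dim)
    for _ in range(iters):
        actions = mean + std * torch.randn(samples, horizon, action_dim)
        z = z0.expand(samples, -1)
        for t in range(horizon):
            z = dynamics(z, actions[:, t])         # roll out in latent space
        cost = (z - z_goal).pow(2).sum(-1)         # distance to goal features
        elite = actions[cost.topk(16, largest=False).indices]
        mean, std = elite.mean(0), elite.std(0)
    return mean[0]                                  # MPC-style: execute step 1

# Dummy stand-ins so the sketch runs end-to-end.
D = 32
encoder = lambda img: torch.randn(1, D)
dynamics = lambda z, a: z + 0.1 * torch.randn_like(z)
print(plan(encoder, dynamics, obs=None, goal_obs=None))
```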
Less than $450 and fully open-source 🤯
by @huggingface, @therobotstudio, @NepYope
This tendon-driven technology will disrupt robotics! Retweet to accelerate its democratization 🚀
A thread 🧵
DynaMo: In-Domain Dynamics Pretraining for Visuo-Motor Control @jeffacce.bsky.social @lerrelpinto.com
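As a rough picture of what "in-domain dynamics pretraining" means, here's a hedged toy version: train the visual encoder jointly with a latent forward-dynamics model on consecutive demonstration frames. Purely illustrative; see the paper for the actual objective.

```python
# Toy, hedged illustration of dynamics pretraining (not the DynaMo code):
# train a visual encoder jointly with a latent forward model so that
# consecutive demonstration frames become predictable in feature space.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))
forward_model = nn.Linear(128, 128)
opt = torch.optim.Adam(
    [*encoder.parameters(), *forward_model.parameters()], lr=3e-4)

frames = torch.randn(16, 3, 64, 64)   # dummy consecutive demo frames
z = encoder(frames)
# Predict each next-frame embedding from the current one.
loss = nn.functional.mse_loss(forward_model(z[:-1]), z[1:].detach())
opt.zero_grad(); loss.backward(); opt.step()
```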
ironj.github.io/eleuther/
We call this method Prescriptive Point Priors for robot Policies, or P3-PO for short. Full project is here: point-priors.github.io
BAKU is modular, language-conditioned, compatible with multiple sensor streams & action multi-modality, and, importantly, fully open-source!
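Here's a hedged sketch of what such a modular, language-conditioned architecture might look like; module names and sizes are illustrative, not BAKU's actual implementation:

```python
# Hedged sketch of a modular, language-conditioned policy (illustrative
# shapes and names only, not BAKU's actual implementation): per-modality
# encoders feed a shared transformer trunk with a swappable action head.
import torch
import torch.nn as nn

class ModularPolicy(nn.Module):
    def __init__(self, d: int = 256, action_dim: int = 7):
        super().__init__()
        self.encoders = nn.ModuleDict({
            "rgb": nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, d)),
            "proprio": nn.Linear(7, d),
            "language": nn.Linear(384, d),   # e.g. a sentence embedding
        })
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(d, action_dim)  # swap for other heads

    def forward(self, inputs: dict[str, torch.Tensor]) -> torch.Tensor:
        # One token per modality; pool the last token for the action head.
        tokens = torch.stack(
            [self.encoders[k](v) for k, v in inputs.items()], dim=1)
        return self.action_head(self.trunk(tokens)[:, -1])

policy = ModularPolicy()
action = policy({
    "rgb": torch.randn(2, 3, 64, 64),
    "proprio": torch.randn(2, 7),
    "language": torch.randn(2, 384),
})
print(action.shape)  # torch.Size([2, 7])
```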
To start off: Robot Utility Models, which enable zero-shot deployment. In the video below, the robot hasn't seen these doors before.