I like robots!
The key technical breakthrough here is that we can control the robot's joints and fingertips **without joint encoders**. All it takes is self-supervised data collection and learning.
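A minimal sketch of how encoder-free control could work, assuming supervision comes from the motor commands the robot sends itself during autonomous data collection. All names here are hypothetical stand-ins, not the actual method's code:

```python
# Sketch: command random motor actions, record camera frames, then train a
# network to regress the commanded motor state from pixels. At run time the
# network stands in for the missing joint encoders.
import torch
import torch.nn as nn

class VisualStateEstimator(nn.Module):
    def __init__(self, num_joints: int):
        super().__init__()
        self.backbone = nn.Sequential(          # small CNN image encoder
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_joints)   # regress joint configuration

    def forward(self, frames):
        return self.head(self.backbone(frames))

model = VisualStateEstimator(num_joints=16)
opt = torch.optim.Adam(model.parameters(), lr=3e-4)

# Stand-in batch: in a real pipeline, frames come from the robot's camera and
# labels are the motor commands that were actually sent, so supervision is
# free and no joint encoders are involved.
frames = torch.randn(8, 3, 64, 64)
commanded_joints = torch.randn(8, 16)
loss = nn.functional.mse_loss(model(frames), commanded_joints)
opt.zero_grad(); loss.backward(); opt.step()
```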
In the latest episode, I chatted to Prof. Lerrel Pinto (@lerrelpinto.com) from New York University about #robot learning and decision making.
Available wherever you get your podcasts: linktr.ee/robottalkpod
Later this season, I'll be chatting to Prof. Lerrel Pinto (@lerrelpinto.com) from NYU about using machine learning to train robots to adapt to new environments.
Send me your questions for Lerrel: robottalk.org/ask-a-question/
1. RGBD + Pose data
2. Audio from the mic or custom contact microphones
3. Seamless Bluetooth integration for external sensors
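A hypothetical record type bundling those three streams into one time-synced capture step. Field names are illustrative, not the app's actual export format:

```python
# Illustrative schema for one synchronized capture step across the three
# streams listed above (RGBD + pose, audio, external Bluetooth sensors).
from dataclasses import dataclass
import numpy as np

@dataclass
class CaptureFrame:
    timestamp: float                 # seconds, shared clock for alignment
    rgb: np.ndarray                  # (H, W, 3) uint8 color image
    depth: np.ndarray                # (H, W) float32 depth in meters
    pose: np.ndarray                 # (4, 4) camera-to-world transform
    audio: np.ndarray                # (N,) PCM chunk from mic / contact mic
    ble_readings: dict[str, float]   # external Bluetooth sensor values

def synced(frames: list[CaptureFrame], max_skew: float = 0.02) -> bool:
    """Check that all modalities in a window fall within a small time skew."""
    ts = [f.timestamp for f in frames]
    return max(ts) - min(ts) <= max_skew
```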
join me
We show that order matters in code gen: casting code synthesis as a sequential edit problem by preprocessing SFT examples improves LM test-time scaling laws.
And it works! Higher performance on HumanEval, MBPP, and CodeContests across small LMs like Gemma-2, Phi-3, Llama 3.1
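For intuition, here is a toy version of that preprocessing step: decompose a reference solution into an ordered chain of intermediate programs, and emit state-to-next-state pairs as SFT examples. The line-by-line ordering below is a stand-in; the paper's actual edit decomposition may differ:

```python
# Toy "code synthesis as sequential edits" preprocessing: split a reference
# solution into intermediate states, then emit (state -> next state) pairs.
def to_edit_trajectory(solution: str) -> list[tuple[str, str]]:
    lines = solution.splitlines()
    pairs = []
    for i in range(len(lines)):
        before = "\n".join(lines[:i])
        after = "\n".join(lines[:i + 1])
        pairs.append((before, after))   # model learns: state -> state + edit
    return pairs

sft_examples = to_edit_trajectory("def add(a, b):\n    return a + b")
for before, after in sft_examples:
    print(repr(before), "->", repr(after))
```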
Introducing DINO-WM: World Models on Pre-trained Visual Features for Zero-shot Planning.
We believe the true potential of world models lies in enabling agents to reason at test time.
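A rough sketch of that recipe as described here: a latent dynamics model over frozen pre-trained visual features (e.g., a 384-dim DINOv2 ViT-S embedding), queried by a simple random-shooting planner at test time. The module names and planner below are my assumptions, not the released code:

```python
# Sketch: world model on frozen visual features + zero-shot latent planning.
import torch
import torch.nn as nn

class LatentDynamics(nn.Module):
    def __init__(self, feat_dim=384, act_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + act_dim, 512), nn.ReLU(),
            nn.Linear(512, feat_dim),
        )

    def forward(self, z, a):                  # predict next latent state
        return self.net(torch.cat([z, a], dim=-1))

def plan(dynamics, z0, z_goal, horizon=10, samples=256, act_dim=4):
    """Zero-shot planning: sample action sequences, roll them out in latent
    space, and pick the one ending closest to the goal features."""
    actions = torch.randn(samples, horizon, act_dim)
    z = z0.expand(samples, -1)
    for t in range(horizon):
        z = dynamics(z, actions[:, t])
    cost = (z - z_goal).pow(2).sum(-1)        # distance to goal in feature space
    return actions[cost.argmin()]             # best action sequence

dyn = LatentDynamics()
z0, z_goal = torch.randn(1, 384), torch.randn(1, 384)   # stand-in features
best = plan(dyn, z0, z_goal)                  # (horizon, act_dim) action plan
```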
Check out the thread from @gaoyuezhou.bsky.social for more details.
I’ll start: my NIH postdoc funding helped me develop and test AI tools that could identify skin cancer across diverse skin tones.