Chris Paxton
@cpaxton.bsky.social
AI, robotics, and other stuff. Currently AI @ Agility Robotics

Former Hello Robot, NVIDIA, Meta.

Writing about robots https://itcanthink.substack.com/

All opinions my own
Pinned
Gen2Act shows us how AI video generation can be used to control robots. We talked to Homanga Bharadwaj, one of the authors. Link to podcast with abstracts + project links: robopapers.substack.com/p/ep57-learn...
On X I've apparently made it onto the target lists for a couple of Chinese propaganda accounts. Like a "tag Chris, he'll share it, it has cool robots" kind of list. They're usually right.
January 7, 2026 at 1:22 PM
Reposted by Chris Paxton
Under greedy/low-temp decoding, reasoning LLMs get stuck in loops repeating themselves, wasting test-time compute and sometimes never terminating!

They find that:

- Low temps => more looping
- Smaller models => more looping
- Harder problems => more looping
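
As a rough illustration of the low-temperature effect (a minimal sketch, not from the paper; the logits, token IDs, and loop-detection threshold are made up):

import numpy as np

def sample_with_temperature(logits, temperature, rng):
    # Scale logits by 1/T; low T sharpens the softmax toward the argmax token.
    scaled = logits / max(temperature, 1e-8)
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return rng.choice(len(logits), p=probs), probs

def ends_in_loop(tokens, ngram=4, repeats=3):
    # Crude loop check: does the sequence end with the same n-gram repeated?
    window = ngram * repeats
    if len(tokens) < window:
        return False
    tail = tokens[-window:]
    first = tail[:ngram]
    return all(tail[i:i + ngram] == first for i in range(0, window, ngram))

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.5, 0.5, 0.0])  # toy next-token scores

for T in (0.1, 1.0):
    _, probs = sample_with_temperature(logits, T, rng)
    print(f"T={T}: p(top token) = {probs.max():.2f}")  # ~0.99 at T=0.1, ~0.51 at T=1.0

# Once nearly all probability mass sits on the argmax continuation, an echoed
# n-gram keeps getting re-picked, which is the looping behavior described above.
print(ends_in_loop([7, 8, 9, 3] * 3))  # True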
January 6, 2026 at 8:18 PM
Reposted by Chris Paxton
Seeing a non-reasoning model suddenly start to reason is what did it for me. It was clear there was more going on under the surface.
I need to fire this model up again sometime. The screenshot is from May '24. It was a 120B Llama 3 self-merge that someone uploaded and most people wrote off, but I stuck with it because it seemed to be the only model I could get to arbitrarily rotate ASCII art shapes at the time. Then it did stuff like this.
January 6, 2026 at 5:35 PM
Reasoning models ended up completely changing my view of AI, to be honest -- from "this is neat" to "this genuinely changes everything." The difference over the last year has been really incredible.
Last January saying reasoning models will generalize was a hot take.
We've come a long way with GPT Thinking and Claude Opus.
www.interconnects.ai/p/why-reason...
January 6, 2026 at 5:32 PM
Reposted by Chris Paxton
Last January saying reasoning models will generalize was a hot take.
We've come a long way with GPT Thinking and Claude Opus.
www.interconnects.ai/p/why-reason...
January 6, 2026 at 4:37 PM
Reposted by Chris Paxton
So now that ~half of my timeline is declaring Claude Code AGI, when does OpenAI's commitment to stop competing kick in?
January 6, 2026 at 3:49 PM
Reposted by Chris Paxton
I'm increasingly self-convinced of the value of diffusion-based models over GPTs
Gen2Act shows us how AI video generation can be used to control robots. We talked to Homanga Bharadwaj, one of the authors. Link to podcast with abstracts + project links: robopapers.substack.com/p/ep57-learn...
January 6, 2026 at 2:54 PM
Gen2Act shows us how AI video generation can be used to control robots. We talked to Homanga Bharadwaj, one of the authors. Link to podcast with abstracts + project links: robopapers.substack.com/p/ep57-learn...
January 6, 2026 at 2:42 PM
Reposted by Chris Paxton
How does the brain control locomotion? In our new preprint, we uncover a brain circuit in Drosophila that controls forward walking independently of turning. This dedicated locomotor circuit enables flexible motor control and might reflect a shared principle across species. doi.org/10.64898/202...
January 5, 2026 at 4:25 PM
Reposted by Chris Paxton
Slowly starting to realize that in just 5-10 years we may actually understand how the brain works.
How does the brain control locomotion? In our new preprint, we uncover a brain circuit in Drosophila that controls forward walking independently of turning. This dedicated locomotor circuit enables flexible motor control and might reflect a shared principle across species. doi.org/10.64898/202...
January 6, 2026 at 5:22 AM
Reposted by Chris Paxton
I’m operating the same way (heavily human-gated), but I don’t think that’s how Boris and other “power” users operate. Anthropic posted about this at www.anthropic.com/engineering/... and also has a plugin that is a step in that direction: github.com/anthropics/c.... If only I had more time and credits…
Effective harnesses for long-running agents
www.anthropic.com
January 6, 2026 at 5:32 AM
Reposted by Chris Paxton
This website has been an experiment run by Harvard to find the one pure person who is allowed to post. The experiment is concluded, thank you for participating.

The person is @gracekind.net
January 6, 2026 at 2:43 AM
This is like textbook singularity stuff, isn't it? Maybe the 2027 guys were right?
"Correct. In the last thirty days, 100% of my contributions to Claude Code were written by Claude Code"
x.com/bcherny/stat...
January 6, 2026 at 2:49 AM
Reposted by Chris Paxton
"Correct. In the last thirty days, 100% of my contributions to Claude Code were written by Claude Code"
x.com/bcherny/stat...
January 6, 2026 at 1:56 AM
I know this is design for manufacture, but the new Atlas looks so dorky
January 6, 2026 at 2:37 AM
Reposted by Chris Paxton
If the DEA had killed 80 innocent Americans in the course of apprehending one drug dealer, there would be riots. But we are so ghoulishly indifferent to the lives and humanity of people abroad that it's barely even part of the conversation.
A reminder that the US killed 80 people in Venezuela, and it would be nice if the US media cared enough to think that the life of a grandmother in Caracas whose building is destroyed by a US bomb matters as much as the life of a person in the US.
January 5, 2026 at 8:57 PM
Reposted by Chris Paxton
Man you know who I’d hate to be right now?? Literally anyone in Taiwan
One of the most remarkable US presentations at the UN Security Council I've ever seen.

- No reference to UN Charter legal justification
- Claims Panama as precedent (which the UN condemned)
- Energy reserves ⤵️ as justification is illegal
- Sharp contrast with US Ambassador Pickering presentation in 1989
January 5, 2026 at 8:02 PM
Reposted by Chris Paxton
NYT did some reporting on this. OpenAI allowed user preferences for responses to influence model training, and users rate sycophancy higher.

Really dumb mistake, and from what I'm seeing they may have actually swung in the opposite direction these days.

www.nytimes.com/2025/11/23/t...
January 5, 2026 at 8:16 PM
Reposted by Chris Paxton
I've blocked or been blocked to the point that my Discover feed looks suspiciously like my Following feed.
January 5, 2026 at 8:21 PM
Reposted by Chris Paxton
broad brush take, but: it appears that wonder and curiosity are essential elements for not aging into a Bitter Old Person, or keeping a “youthful” mind.
Amen, or to put it another way: the youth will have imagination that should provoke the older and stiffer-minded, as well as the flexible. Old folks have knowledge, which can help some avoid repeating errors, but plenty of middle-aged and old people can’t imagine for shit; we *need* new minds.
January 5, 2026 at 6:15 PM
people have been saying this my whole life tbh
WARNING TO THE YOUTH: the older you get, the more of your peers start saying “dang, I don’t have the energy or time to be curious” or “I gave up on that stuff”. the gradient seems to be pretty steady across the years, so it adds up as we go.
broad brush take, but: it appears that wonder and curiosity are essential elements for not aging into a Bitter Old Person, or keeping a “youthful” mind.
January 5, 2026 at 7:20 PM
Reposted by Chris Paxton
Physical Intelligence Sweeps Humanoid Olympics
More coming soon
generalrobots.substack.com
January 5, 2026 at 6:27 PM
Left v right has been a classic problem for language models for a long time, no?
Mirror test for Claude, but it's the mirror reversal test

@gracekind.net
January 5, 2026 at 5:46 PM
Reposted by Chris Paxton
OpenML.fyi has been slow, but this 👇🏾 is still very much true.
January 5, 2026 at 4:03 PM