John Paulett
@johnpaulett.com
johnpaulett.com
Engineering Leader & Architect @hoppr.ai

#radsky #medsky
100% agree on having tooling (linting/types/tests) helping out, though I'll also have Claude add those in.

I've found, increasingly, that if Claude doesn't work, it's better to wipe out the changes and work on improving my prompt.

www.john-rush.com/posts/ai-202...
Building a Personal AI Factory (July 2025 snapshot)
Multiple parallel Claude-Code sessions power a self-improving AI factory where agents write, review, and refine code.
www.john-rush.com
July 14, 2025 at 3:44 PM
4. I like ChatGPT Codex running on OpenAI's servers, as it allows me to parallelize my work. If Claude had this feature, I'd probably use it exclusively.
June 10, 2025 at 1:16 PM
3. AI coding is good at those tasks you think take 10 minutes but end up spiraling into multiple hours: e.g., bumping versions and dealing with compatibility issues
June 10, 2025 at 1:16 PM
A few takeaways:
1. Invest in your CLAUDE.md or AGENTS.md file to give details about the code, what you expect in general, and how to run testing, type checking, and linting.
2. The more type hints and tests you have, the better, as the agent will use these to verify its work
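A minimal sketch of what such a CLAUDE.md might contain (the project description and specific commands here are illustrative assumptions, borrowing the uv + ruff tooling mentioned elsewhere in the thread):

```markdown
# Project notes for Claude

## Overview
Python service for processing imaging metadata (illustrative description).

## Commands
- Run tests: `uv run pytest`
- Type check: `uv run mypy src/`
- Lint: `uv run ruff check .`

## Conventions
- New functions get type hints and a test.
- Run the full test + type check + lint loop before declaring a task done.
```

The point is that the agent reads this file at the start of a session, so it can run the same verification loop you would.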
June 10, 2025 at 1:16 PM
After several hours, I upgraded to the lowest Claude Max tier as my Opus tokens were used up, and I did not want to stop progressing.
June 10, 2025 at 1:16 PM
In 3-4 hours, I got done what would have taken me 3-4 full days -- most of this work is not rocket science or even the interesting parts of a project (wiring up whatever build system is currently popular, getting all the correct dependency versions).
June 10, 2025 at 1:16 PM
🧠 Claude Code 🧠, which is now included in the $20/mo Pro plan, is like working with a good mid-level developer. Claude took my prompt, implemented multiple subsystems in parallel, and gave me a working prototype. I spent a few more rounds with Claude adding additional functionality.
June 10, 2025 at 1:16 PM
I then spent all Sunday morning building a complex prompt describing a prototype. Codex got the basic project structure set up, but instead of implementing most functionality, it just stubbed it out. A few repeated attempts to build on that base with smaller parts of my prompt didn't get far.
June 10, 2025 at 1:16 PM
My Codex team completed 14 Pull Requests: from adding support for newer Python versions and converting to uv+ruff, to finding a very subtle bug and correcting longstanding typos.
June 10, 2025 at 1:16 PM
🔧 ChatGPT Codex 🔧 runs OpenAI Codex in containers on OpenAI's servers and is now available with ChatGPT Plus ($20/mo). It is like having a team of junior developers. I give them tasks, they do the work, test and type check. I provide some feedback, and they open GitHub Pull Requests.
June 10, 2025 at 1:16 PM
My background: I am a long-time GitHub Copilot and continue.dev user (with multiple LLMs, recently Claude).
Continue
Amplified developers, AI-enhanced development · The leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside the IDE
continue.dev
June 10, 2025 at 1:16 PM
I have no idea what the context is, but you're making me hungry.
May 14, 2025 at 11:52 PM
Just rebuilt a new EC2 VM instead of trying to attach the volume to a working VM and fixing it in a chroot
May 2, 2025 at 1:40 PM
DeepSeek built a reasoning LLM using reinforcement learning, minimal labeled data, and auto-verification. Key innovation: R1-Zero proves high-quality reasoning is possible without massive supervised datasets.
newsletter.languagemodels.co/p/the-illust...
The Illustrated DeepSeek-R1
A recipe for reasoning LLMs
newsletter.languagemodels.co
January 28, 2025 at 2:12 PM
Deep dive on NVDA: Despite AI boom, major threats emerge from innovative hardware (Cerebras, Groq), custom silicon (big tech), software alternatives (MLX, Triton), and DeepSeek's 45x efficiency gains. Risks may not be priced in @ 20x sales & 75% margins
youtubetranscriptoptimizer.com/blog/05_the_...
The Short Case for Nvidia Stock
All the reasons why Nvidia will have a very hard time living up to the currently lofty expectations of the market.
youtubetranscriptoptimizer.com
January 28, 2025 at 2:01 PM
HuggingFace: "We are reproducing the full DeepSeek R1 data and training pipeline so everybody can use their recipe. Instead of doing it in secret we can do it together in the open!"
x.com/_lewtun/stat...
x.com
January 26, 2025 at 2:53 AM
"R1 distillations are going to hit us every few days - because it's ridiculously easy (<$400, <48hrs) to improve any base model with these chains of thought eg with Sky-T1"
news.ycombinator.com/item?id=4282...
we've been tracking the deepseek threads extensively in LS. related reads: - i c... | Hacker News
news.ycombinator.com
January 26, 2025 at 2:53 AM