AI agents are very useful. But creation itself is irreplaceable.
If you always delegate creation, you lose mechanical sympathy, and eventually effectiveness.
The question: how do you preserve that effectiveness while still getting the most from agents?
If engineers mostly review and rarely create, they become managers.
And managers, no matter how capable, eventually lose the ability to intuitively understand the system. They lose mechanical sympathy.
The boundary is fuzzy and domain-specific.
I think there’s no downside to having AI review your work, no matter how complex the domain.
Several times I just rewrote things from scratch and produced both a better solution and a clearer mental model.
Option (2) can work, but problems surface. Not right away; they appear months later.
The issue isn't that the code is broken, but that it isn't exactly right. The solution doesn't fit the system well.
1. The agent works independently (vibe).
2. The agent drives, I navigate (short leash).
3. I drive, the agent reviews.
Sometimes that deep understanding isn’t essential (e.g., demos, scripts, CRUD).
But in other contexts (distributed systems, performance-critical work), it’s vital. The tradeoffs depend on the domain, and your approach to AI has to adapt.
The act of writing produces a much deeper understanding than reviewing, no matter how careful the review. This applies to both human-written and AI-written code.
That’s why we ask authors (not reviewers) to debug. Writing forces you to internalize the system.
Even with careful review, an incomplete understanding of AI-written code becomes problematic over time. On several occasions we had to rewrite it from scratch.
Cursor, Claude Code: a bit of complexity for a lot of value. Kind of magical.
Six months later: dozens of tools, swarms of agents using different models with separate config files. Nothing composes.