Victor Savkin
@victorsavkin.bsky.social
🇨🇦 Creator of Nx, Co-founder and CTO of @nx.dev

❤️ JS, build tools, scale
10/10 Summary:

AI agents are very useful. But creation itself is irreplaceable.

If you always delegate creation, you lose mechanical sympathy, and eventually effectiveness.

Question: how do we preserve that effectiveness while still getting the most from agents?
September 25, 2025 at 3:33 PM
9/10 The longer-term risk is erosion of expertise.

If engs mostly review and rarely create, they become managers.

And managers, no matter how capable, eventually lose the ability to intuitively understand the system. They lose mechanical sympathy.
September 25, 2025 at 3:33 PM
8/10 Question: When should AI drive, and when should it only review?

The boundary is fuzzy and domain-specific.

I think there’s no downside to having AI review your work, no matter how complex the domain.
September 25, 2025 at 3:33 PM
7/10 It's because I wasn't able to engage with it deeply while producing it, and I didn't have a good understanding of the different options and edge cases.

Several times I just rewrote things from scratch and produced both a better solution and a clearer mental model.
September 25, 2025 at 3:33 PM
6/10 In complex domains, the first mode (vibe) isn't an option.

The second mode (short leash) can be an option, but problems surface. And they don't surface right away. They appear months later.

The issue isn't that the written code is broken, but that it isn't exactly right. The solution doesn't fit well.
September 25, 2025 at 3:33 PM
5/10 I use AI agents in three ways:
- The agent works independently (vibe).
- The agent drives, I navigate (short leash).
- I drive, the agent reviews.
September 25, 2025 at 3:33 PM
4/10 Observation 2:
Sometimes that deep understanding isn’t essential (e.g., demos, scripts, CRUD).

But in other contexts (distributed systems, performance-critical work), it’s vital. The tradeoffs depend on the domain, and your approach to AI has to adapt.
September 25, 2025 at 3:33 PM
3/10 Observation 1:
The act of writing produces a much deeper understanding than reviewing, no matter how careful the review. This applies to both human-written and AI-written code.

That’s why we ask authors (not reviewers) to debug. Writing forces you to internalize the system.
September 25, 2025 at 3:33 PM
2/10 Many of us have observed the same challenge:

Even when AI-written code is carefully reviewed, an incomplete understanding of it can become problematic over time. On several occasions we had to rewrite it from scratch.
September 25, 2025 at 3:33 PM
3/ The original simplicity that made the idea valuable gets buried by clever engineering.
August 21, 2025 at 2:20 PM
2/ You can see it very clearly in AI tooling.

Cursor, Claude Code: a bit of complexity for a lot of value. Kind of magical.

6 months later: dozens of tools, swarms of agents using different models with separate config files. Nothing composes.
August 21, 2025 at 2:20 PM