Fredrik Sundmyhr
fsundmyhr.bsky.social
Early finding: batching AI interactions helps more than spreading them through the day. Reduces context switching, keeps flow longer.

The productivity numbers look great on paper. The sustainability picture is hazier.

More on this if the experiments pan out.
November 26, 2025 at 5:53 PM
Testing now: dedicated review blocks instead of continuous evaluation, stricter prompt templates that force edge case thinking, clearer protocols for when to use AI vs. when to just code.
November 26, 2025 at 5:53 PM
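One way the stricter templates above could look, sketched in Python. This is a hypothetical illustration, not the actual template from the thread; the wording and structure are assumptions.

```python
# Hypothetical prompt template that forces edge-case thinking before code.
# The checklist items are illustrative assumptions, not the real template.
EDGE_CASE_TEMPLATE = """\
Task: {task}

Before writing any code, list:
1. Edge cases (empty input, nulls, boundary values, concurrency).
2. Failure modes and how the code should surface them.
3. Assumptions you are making about the caller.

Then implement, and note which listed edge cases the code covers.
"""

def build_prompt(task: str) -> str:
    """Fill the template so every AI request starts with edge cases."""
    return EDGE_CASE_TEMPLATE.format(task=task)

print(build_prompt("Parse ISO 8601 dates from user input"))
```

The point of the fixed structure is that acceptance or rejection of the AI's output becomes a checklist comparison rather than an open-ended judgment call, which is one way to cut per-suggestion decision cost.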
We shipped three features last month that would've taken six weeks pre-AI. The speed is real. So is the fatigue.
November 26, 2025 at 5:53 PM
Architectural decisions: increasingly optimized for AI, not humans. Flatter structures so AI can parse dependencies. More explicit naming. Comments written for AI tools. Some of this helps. Some is technical debt we're knowingly taking on.
November 26, 2025 at 5:53 PM
Review time: up 35%. AI writes functional code fast. Understanding why it made those choices, catching edge cases it missed, tracing dependencies it didn't flag: that takes time human-written code doesn't.
November 26, 2025 at 5:53 PM
Context switching: up 40%. Every AI suggestion requires a decision. Accept, modify, reject. Pull out of flow, evaluate, get back in. Twenty micro-decisions per hour adds up.
November 26, 2025 at 5:53 PM
I ran the numbers across BetterShore, Kontrava, and our other companies. Development velocity up 60% since we went AI-first. But I kept hearing about this exhaustion. So I tracked what nobody measures: the cognitive overhead.
November 26, 2025 at 5:53 PM
The founders who post daily wins aren't showing you the invoice reviews. The prompt iterations. The contract line items. The repetitive infrastructure that keeps everything running.

Still early. But the resistance is quieter than it used to be.
November 25, 2025 at 9:02 PM
Last week I spent three hours reconciling accounts. Didn't resist it. Just did the reps.

That's new.

Physical training is quietly rewiring my tolerance for professional grind. Teaching my brain that repetition isn't something to escape. It's where the actual work happens.
November 25, 2025 at 9:02 PM
I used to fight this. Needed the work to be interesting to show up for it.

Six months into CrossFit, something shifted.

CrossFit is nothing but repetition. Same movements. Same count. Same grind. You don't skip to the interesting part. You just do the reps.
November 25, 2025 at 9:02 PM
The interesting work is maybe 20%. The other 80% is the same tasks, repeated. Reviewing AI agent outputs for the third time this week. Proposal templates that look like the last twelve. The weekly finance review that's never exciting but always necessary.
November 25, 2025 at 9:02 PM
Still working out how retrieval ranks relevance and how we version-control answer evolution. But watching the same question go from two hours to thirty seconds across three weeks: that's the unlock.
November 24, 2025 at 8:02 PM
We're running this across Taktway, BetterShore, Guidaro, Norden IT now. Same engine, same knowledge base. Every answer teaching the system what prospects actually care about.
November 24, 2025 at 8:02 PM
When questions come in, the system finds previous answers we've written. We refine them. The next time someone asks, they get the improved version. The knowledge compounds across every proposal we send.
November 24, 2025 at 8:02 PM
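The refine-and-reuse loop described above could be sketched like this. A hypothetical Python sketch only: the class names and the naive token-overlap scoring are my assumptions, not the actual system.

```python
# Hypothetical sketch of compounding answers: every refinement is stored as
# a new version, and retrieval uses a crude token-overlap score.
from dataclasses import dataclass, field

@dataclass
class AnswerRecord:
    question: str
    versions: list[str] = field(default_factory=list)  # oldest first

    @property
    def latest(self) -> str:
        return self.versions[-1]

class AnswerStore:
    def __init__(self):
        self.records: list[AnswerRecord] = []

    def _overlap(self, a: str, b: str) -> float:
        """Crude relevance score: fraction of shared lowercase tokens."""
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / max(len(ta | tb), 1)

    def find(self, question: str):
        """Return the closest previously answered question, if any match."""
        best = max(self.records,
                   key=lambda r: self._overlap(r.question, question),
                   default=None)
        if best is not None and self._overlap(best.question, question) > 0:
            return best
        return None

    def record(self, question: str, answer: str) -> None:
        """Append a refined version to an existing record, or start one."""
        match = self.find(question)
        if match is not None:
            match.versions.append(answer)
        else:
            self.records.append(AnswerRecord(question, [answer]))

store = AnswerStore()
store.record("How do you ship features so fast with AI?",
             "Overnight agent cycle.")
store.record("How do you ship so fast with AI?",
             "Overnight cycle, plus human approval gates.")
hit = store.find("How are you shipping features this fast?")
print(hit.latest if hit else "no match")
```

A real system would use embedding-based retrieval rather than token overlap, but the compounding property is the same: each `record` call makes the next `find` return a better answer.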
This is Project Phoenix. We create proposals in Markdown through conversational work with AI—scope, pricing tiers, risks, delivery. The Markdown renders as interactive web pages instead of PDFs. Clients comment inline, ask questions, highlight sections. We track what they read and when.
November 24, 2025 at 8:02 PM
Third prospect asked last Thursday at 11pm. The system surfaced both previous answers. I saw the evolution, approved the refined version, sent it. Thirty seconds.
November 24, 2025 at 8:02 PM
Second prospect asked November 7th. I pulled up the first answer, realized I'd missed the guardrails—how we prevent AI from breaking production, the rollback protocols, what humans must approve versus what runs autonomously. Added those. Refined the language. Twenty minutes.
November 24, 2025 at 8:02 PM
Explained our time zone advantage, the human approval gates, how we tolerate crashes that would be unacceptable from traditional software. Two hours of work for three paragraphs.
November 24, 2025 at 8:02 PM
The question: "How do you ship features so fast with AI?" First prospect asked it October 28th. I documented our overnight development cycle—AI agents write code while we sleep, testers in Canada document bugs by morning, developers in Norway review and approve changes by afternoon.
November 24, 2025 at 8:02 PM
Still testing how this scales beyond code. The documentation might drift without active maintenance. But tribal knowledge has always been the vulnerability.

AI just makes proper documentation unavoidable if you want efficiency.
November 23, 2025 at 6:50 PM
When an agent finishes a feature now, documenting what it did and why is part of completing the work. The documentation stays current because it's required for the AI to function, not because someone decided documentation was important.
November 23, 2025 at 6:50 PM
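The "documentation is part of done" gate above could look something like this. A minimal, hypothetical sketch: the names and the keyword check are illustrative assumptions, not the real pipeline.

```python
# Hypothetical completion gate: an agent's change only counts as done if it
# ships a doc entry that explains the "why", not just the "what".
from dataclasses import dataclass

@dataclass
class AgentChange:
    feature: str
    code_diff: str
    doc_entry: str  # what the agent did and why, written for the next agent

def complete(change: AgentChange) -> bool:
    """Reject completion unless documentation exists and explains the why."""
    has_docs = len(change.doc_entry.strip()) > 0
    # Crude stand-in for a real review: look for causal language.
    explains_why = ("because" in change.doc_entry.lower()
                    or "why" in change.doc_entry.lower())
    return has_docs and explains_why

ok = complete(AgentChange(
    feature="rate-limit retries",
    code_diff="+ retry with backoff",
    doc_entry="Added exponential backoff because the API throttles bursts.",
))
print(ok)  # True for this change
```

The design point is that the gate runs in the same automated path that marks work finished, so documentation can't silently fall behind the code.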
The new developer had the same context the agents use. They understood why files were named what they were named. How the application logic fit together. What the domain rules actually meant. The knowledge that used to live in senior developers' heads was in the system.
November 23, 2025 at 6:50 PM