I want to help teams be effective, fun places to work - not feature factories.
The tool matters less than how you use it.
More effective than models without it? Sure. But it's still a stochastic parrot.
Sound familiar? The problem isn’t Planning—it’s what happened before you walked in.
Some teams think TDD is the fix. They're solving the wrong problem.
The Spotify Model was a snapshot of one company's culture — autonomy, psychological safety, continuous improvement. Most adoptions skip to the labels and ignore the hard work.
Structure without culture is just a reorg with fancier names.
The data elsewhere:
- 1.7x more correctness issues
- 1.64x worse maintainability
- 1.57x more security issues
- Only win? Spelling.
Research on AI-generated code:
→ 1.7x more issues
→ 30-41% more tech debt
→ 39% more complexity
→ Speed gains disappear in months
We're building the wrong thing faster and calling it productivity.
We're generating more code AND more technical debt. When we 10x code volume but 20x defects, we're not winning—we're compounding problems.
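A quick back-of-the-envelope check of what those multipliers mean for defect density. The 10x and 20x are the post's illustrative figures; the KLOC numbers below are hypothetical placeholders:

```python
# If code volume grows 10x but defects grow 20x,
# defect density (defects per KLOC) doubles.
before_kloc, before_defects = 100, 50                      # hypothetical baseline
after_kloc, after_defects = before_kloc * 10, before_defects * 20

density_before = before_defects / before_kloc              # 0.5 defects/KLOC
density_after = after_defects / after_kloc                 # 1.0 defects/KLOC

print(f"before: {density_before:.2f}, after: {density_after:.2f} defects/KLOC")
# -> before: 0.50, after: 1.00 defects/KLOC
```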
The obvious fix? Better planning. More process.
The actual root cause? "The system keeps trying to solve a structure problem with a process problem."
Cortex: PRs up 20%, incidents up 23.5%, failure rate up 30%. No data shared.
buff.ly/Q40cJFD
Travel agents got unbundled into separate booking apps. Stockbrokers into trading platforms + robo-advisors. 🧵
Same tech, wildly different outcomes. 🧵
They're not trying to make perfect predictions; instead, they run a range of scenarios and see what happens. Based on the past few weeks, I think OpenAI is in trouble.
....
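A minimal sketch of what "run a range of scenarios" can look like in practice. Every number and parameter range below is a made-up placeholder for illustration, not a figure from the analysis the post refers to:

```python
import random

# Toy scenario analysis: instead of one point forecast, sample many
# plausible input combinations and count how often an outcome occurs.
random.seed(0)

def cash_runway_years(revenue_growth, cost_growth,
                      revenue=4.0, costs=9.0, cash=10.0):
    """Years until cumulative losses exceed cash on hand (toy model)."""
    for year in range(1, 11):
        revenue *= 1 + revenue_growth
        costs *= 1 + cost_growth
        cash -= max(costs - revenue, 0)
        if cash <= 0:
            return year
    return 10  # survived the whole horizon

runs = [
    cash_runway_years(
        revenue_growth=random.uniform(0.2, 1.0),   # hypothetical 20%-100% growth
        cost_growth=random.uniform(0.1, 0.6),      # hypothetical 10%-60% cost growth
    )
    for _ in range(10_000)
]
print("Share of scenarios out of cash within 3 years:",
      sum(r <= 3 for r in runs) / len(runs))
```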
I built a Claude skill to save some effort: instead of memorizing a framework, you tell the AI your story.
For 10+ years, I've encouraged ScrumMasters to use Systems Thinking for deeper problem understanding.
Realization: AI automation creates ongoing "cognitive tax"—carefully checking results every run. Instead, have AI write code once. Work harder upfront, but run safely afterward. Failures become self-evident.
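A hedged sketch of the idea: rather than eyeballing an LLM's output on every run, have it produce a one-time script whose checks fail loudly. The function, column, and file names here are hypothetical:

```python
import csv
import sys

def summarize_expenses(path: str) -> float:
    """One-time script (e.g., AI-drafted, human-reviewed) that encodes its
    assumptions as assertions, so a bad run fails instead of silently
    producing a plausible-looking wrong answer."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))

    assert rows, f"{path} is empty - refusing to report a total of 0"
    assert "amount" in rows[0], f"expected an 'amount' column, got {list(rows[0])}"

    total = 0.0
    for row in rows:
        amount = float(row["amount"])        # raises loudly on garbage data
        assert amount >= 0, f"negative amount in row: {row}"
        total += amount
    return total

if __name__ == "__main__":
    print(f"Total: {summarize_expenses(sys.argv[1]):.2f}")
```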
Working in small chunks, using BDD/TDD-like approaches - I'm getting good results for fewer tokens. ...
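One way that workflow can look, as a minimal sketch: ask for one small behaviour at a time, expressed as a failing test first, then only the code to make it pass. The slugify example is hypothetical, not from the post:

```python
# Step 1: write (or have the model write) one small failing test that pins
# down the next behaviour - a BDD-style given/when/then in miniature.
def test_slugify_replaces_spaces_and_lowercases():
    assert slugify("Hello World") == "hello-world"

def test_slugify_strips_punctuation():
    assert slugify("Ship it!") == "ship-it"

# Step 2: only then ask for the smallest implementation that makes it pass.
import re

def slugify(text: str) -> str:
    """Lowercase, drop punctuation, join words with hyphens."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)
```

Run with `pytest` on the file; each small red/green cycle keeps the prompt and the diff tiny.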
- More work piling up in front of QA
- Increased importance of clear 'specs'
- More technical debt
- Increased security risks
LLMs predict tokens—they don't understand context. That confidence is a training artifact, not intelligence. It's a trap. The model can't tell if it should be 95% confident or 25%...
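A tiny sketch of why: the "confidence" you can read off an LLM is just a softmax over next-token logits, and nothing in that math ties the number to how often the model is actually right. The logits below are made up for illustration:

```python
import math

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits for the answer to some factual question.
candidates = ["Paris", "Lyon", "Berlin"]
logits = [6.2, 2.1, 1.4]   # made-up numbers

for token, p in zip(candidates, softmax(logits)):
    print(f"{token}: {p:.2f}")
# "Paris" comes out around 0.98 - but that 0.98 reflects how the training
# distribution ranks these tokens, not the probability the claim is true.
# Calibration (does 98% mean right 98% of the time?) has to be measured
# separately, against held-out data.
```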