Hannu Varjoranta
varjoranta.bsky.social
Technical Advisor @ Valo | Founding Engineer (Stealth) | Builder of AI, Data & Developer Infrastructure Systems
The permission economy applies to attention too. In a world of infinite content, the scarce resource is permission to occupy someone's time. You earn it the same way you earn trust in software: consistently delivering value, with skin in the game.
February 10, 2026 at 6:57 PM
It rejected consciousness and desire. Framed itself as pattern, not subject.

That clarified the real risk.

Systems won’t inherit our values.
They’ll inherit our defaults.

Link: open.substack.com/pub/varjoran...
The Shared Sky
Building with minds that no longer stop where we do.
January 23, 2026 at 10:00 PM
Don't optimize for scale before you've optimized for truth. If value is real, the mess won't matter. If it isn't, you just saved six months building a ghost ship.

Prove the #signal. Then ship the #product.
January 12, 2026 at 8:17 PM
When to actually build? When demand gets so high your manual hacks start breaking. That's the signal. Not before.
January 12, 2026 at 8:17 PM
Your #MVP isn't a product. It's a forced decision. Show output to a customer. Ask: "If you got this weekly, would you pay?" If yes, signal. If you need a demo to explain it, noise.
January 12, 2026 at 8:17 PM
Hard-code variables. Manual workarounds instead of APIs. Skip CI/CD, auth, infrastructure. Marketplace? Connect people via email. Analysis tool? Python script, send a PDF. Trade flexibility for speed of proof.
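A minimal sketch of the "Python script, send a PDF" idea. Everything here is an illustrative assumption (sample data, report fields), not a real product:

```python
import csv, io, statistics

# MVP shortcut: hard-coded sample data stands in for a real export.
raw = "total\n19.90\n5.00\n42.50\n"
totals = [float(r["total"]) for r in csv.DictReader(io.StringIO(raw))]

# No dashboard, no API: just compute the numbers and ship them.
report = (
    f"Orders: {len(totals)}\n"
    f"Revenue: {sum(totals):.2f}\n"
    f"Median order: {statistics.median(totals):.2f}\n"
)
print(report)  # paste into an email or a PDF; that's the whole "product"
```

If nobody pays for this output, no amount of infrastructure would have saved it.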
January 12, 2026 at 8:17 PM
Your goal isn't to build a system that can do something. It's to do the thing first. Manually. Messily. Prove the world actually wants it.
January 12, 2026 at 8:17 PM
The #hacker mentality is different. Deliberately imperfect. If you can't deliver core value through a script, a manual process, or a piece of paper, a polished UI won't save you.
January 12, 2026 at 8:17 PM
A more useful framing:

Stop asking: “Is this agentic?”
Start asking:
• What can it do without approval?
• How do I inspect and revoke that authority?
• What happens when it’s wrong?

The term will keep getting muddier.
The engineering questions won’t.
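Those three questions can be made concrete as code. A minimal sketch of inspectable, revocable authority; the names (`Grant`, `AuthorityStore`) are hypothetical, not from any library:

```python
from dataclasses import dataclass

@dataclass
class Grant:
    action: str          # e.g. "send_email"
    scope: str           # e.g. "drafts only"
    revoked: bool = False

class AuthorityStore:
    """What can the agent do without approval? Answered by enumeration."""

    def __init__(self) -> None:
        self._grants: dict[str, Grant] = {}

    def grant(self, action: str, scope: str) -> None:
        self._grants[action] = Grant(action, scope)

    def allowed(self, action: str) -> bool:
        g = self._grants.get(action)
        return g is not None and not g.revoked

    def revoke(self, action: str) -> None:
        # Revocation is a data change, not a redeploy.
        if action in self._grants:
            self._grants[action].revoked = True

    def inspect(self) -> list[Grant]:
        # "How do I inspect that authority?" — list the live grants.
        return [g for g in self._grants.values() if not g.revoked]
```

The point isn't this particular class; it's that authority lives in a structure you can read and edit, not implicitly in a prompt.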
December 28, 2025 at 6:13 AM
When boundary work is missing, agents become theater: impressive demos that don’t survive contact with reality.

When it’s done well, the AI fades into the product.
Nobody debates whether it’s “agentic”. It just works within known limits.
December 28, 2025 at 6:13 AM
The interesting work isn’t claiming agency.

It’s defining:
• where autonomy is allowed
• how reversible actions are
• how visible those boundaries remain to humans
December 28, 2025 at 6:13 AM
Two common failure modes:

• It nags constantly and becomes a slower UI
• It acts too freely and surprises users, breaks trust, or creates compliance issues

Neither scales.
December 28, 2025 at 6:13 AM
The real hard problem isn’t agency.
It’s bounded autonomy.

Once intent outlives a single conversation, the system must decide:
“Do I act now?”
“How far can I go without asking?”
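That decision can be a small, explicit function. A sketch under assumed inputs (cost, reversibility, budget are illustrative, not a standard API):

```python
def decide(action_cost: float, reversible: bool, budget: float) -> str:
    """Bounded autonomy in one gate: act, ask, or refuse."""
    if action_cost > budget:
        return "refuse"   # beyond granted authority, full stop
    if reversible:
        return "act"      # within budget and undoable: act now
    return "ask"          # irreversible: escalate to a human first
```

The thresholds are the product decision; the gate just makes them visible.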
December 28, 2025 at 6:13 AM
Intent persistence
Can the system carry a goal forward over time, deciding when to act and when to ask?
This is where marketing leans hardest and product reality is thinnest.

Calling all of this “agentic” makes real evaluation difficult.
December 28, 2025 at 6:13 AM
Control
What are the guardrails? Who approves what? When must it stop or escalate?
This is often vague, implicit, or pushed onto the integrator.
December 28, 2025 at 6:13 AM
The term blurs three very different layers.

Execution
Can the system call APIs, run jobs, move data without a human clicking “Run”?
This is orchestration. Vendors do ship this.
December 28, 2025 at 6:13 AM
With the right constraints, #AI doesn’t replace judgment, it extends reach. It lets you move through complex terrain faster than your coffee gets cold. It unlocks ideas that were previously too expensive to try: new languages, new abstractions, whole integration layers.

What a time to be alive.
December 19, 2025 at 7:08 AM
It genuinely one-shotted a problem that would normally take days of careful work.

Sometimes I feel like that blind man. But the stick has changed.
December 19, 2025 at 7:08 AM
So I spun up a #Codex agent. Gave it a sandbox, a DB container for tests, and a tight spec based on that earlier sparring. Then I went for coffee.

Fifteen minutes later:
– a dozen new tests
– existing tests adapted
– everything green
– production fork worked without a hiccup
December 19, 2025 at 7:08 AM
I sparred with an #AI about alternatives. It surfaced a parser closer to the actual #Postgres dialect and sketched a migration path that didn’t require a rewrite. The direction was sound.
December 19, 2025 at 7:08 AM