No holy wars needed.
Full comparison with examples in Podo Stack:
podostack.com 🛠️
Terraform: your team knows HCL, you don't run Kubernetes everywhere, you want mature providers and a huge ecosystem.
Crossplane: you're building a platform, you want self-service infra, you already live in Kubernetes, you need drift detection.
It's a maturity question
You define an abstraction — say, "Database" — that bundles an RDS instance, security group, IAM role, and parameter group.
Your dev writes 10 lines of YAML. The platform handles the rest.
That's a golden path for infrastructure.
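Roughly what those 10 lines can look like. A sketch of a claim against a hypothetical "Database" abstraction; the API group, kind, and fields are invented for illustration:

```yaml
# Hypothetical claim against a platform-defined "Database" abstraction.
# Group, kind, and fields are illustrative, not a real provider schema.
apiVersion: platform.example.org/v1alpha1
kind: Database
metadata:
  name: orders-db
  namespace: team-orders
spec:
  engine: postgres
  size: small
  region: eu-west-1
```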
You declare desired state in YAML. A controller reconciles continuously. If someone deletes the S3 bucket from the console — Crossplane recreates it. Automatically.
It's not plan-apply. It's declare-and-forget.
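For the S3 case, a minimal sketch of a managed resource, assuming the Upbound AWS S3 provider is installed; names and region are placeholders:

```yaml
# Crossplane keeps reconciling this spec against AWS.
# Delete the bucket in the console and the controller recreates it.
apiVersion: s3.aws.upbound.io/v1beta1
kind: Bucket
metadata:
  name: audit-logs
spec:
  forProvider:
    region: eu-west-1
  providerConfigRef:
    name: default
```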
Write HCL. Run plan. Review the diff. Run apply. Done.
But between applies? Nothing watches. Someone changes the resource manually in the console — Terraform doesn't know until your next plan.
It's imperative dressed up as declarative.
And honestly? That matters more.
Full breakdown on catalogs, golden paths, and guardrails in Podo Stack:
podostack.com 🛠️
The real magic is the Scaffolder. Golden Path templates that spin up a new service with:
- Repo created
- CI pipeline configured
- Monitoring wired
- catalog-info.yaml already there
Day one, your service exists in the catalog. Not day thirty.
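A trimmed sketch of such a template, using Backstage's built-in fetch:template, publish:github, and catalog:register actions; the repo owner, skeleton path, and parameters are placeholders:

```yaml
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: new-service
  title: New Service
spec:
  owner: platform-team
  type: service
  parameters:
    - title: Name
      required: [name]
      properties:
        name:
          type: string
  steps:
    - id: fetch
      action: fetch:template        # copy the skeleton, templating in the name
      input:
        url: ./skeleton
        values:
          name: ${{ parameters.name }}
    - id: publish
      action: publish:github        # create the repo and push the result
      input:
        repoUrl: github.com?owner=my-org&repo=${{ parameters.name }}
    - id: register
      action: catalog:register      # the catalog entry exists on day one
      input:
        repoContentsUrl: ${{ steps['publish'].output.repoContentsUrl }}
        catalogInfoPath: '/catalog-info.yaml'
```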
One file: catalog-info.yaml. Lives in your repo. Right next to the code.
You change the service, you update the metadata. It's version-controlled. It's reviewable. It's real.
Metadata-as-code
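A minimal example of what that file holds; the names and annotation value are placeholders, the shape is standard Backstage catalog format:

```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-api
  description: Handles card payments
  annotations:
    github.com/project-slug: my-org/payments-api   # ties the catalog entry to the repo
spec:
  type: service
  lifecycle: production
  owner: team-payments
```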
"Let's put it in Confluence"
"Let's build a spreadsheet"
"Let's tag everything in our CMDB"
6 months later: 40% of entries are outdated, nobody trusts the data, and on-call still asks "who owns this?"
Sound familiar?
"Let's put it in Confluence"
"Let's build a spreadsheet"
"Let's tag everything in our CMDB"
6 months later: 40% of entries are outdated, nobody trusts the data, and on-call still asks "who owns this?"
Sound familiar?
"I trust you to ship fast because the platform won't let you break prod."
Full breakdown with policies and examples in this week's Podo Stack:
podostack.com 🛠️
"I trust you to ship fast because the platform won't let you break prod."
Full breakdown with policies and examples in this week's Podo Stack:
podostack.com 🛠️
Start with 80% soft guardrails. Audit mode. Warnings. Slack notifications.
Then watch what people actually do wrong. THEN enforce.
Going straight to hard blocks on day one? That's how you get a revolt and a shadow platform next door.
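What a soft guardrail can look like as a Kyverno policy, sketched here with an illustrative "require an owner label" rule:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-owner-label
spec:
  validationFailureAction: Audit   # report violations, don't block; flip to Enforce later
  rules:
    - name: check-owner-label
      match:
        any:
          - resources:
              kinds: [Deployment]
      validate:
        message: "Every Deployment needs an 'owner' label."
        pattern:
          metadata:
            labels:
              owner: "?*"
```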
Design Time — IDE flags the mistake before you even commit
Deploy Time — CI + OPA reject the bad config at the pipeline
Runtime — Kyverno catches what slipped through at the API server
Stack them. Don't pick one.
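For the runtime layer, a sketch of the same idea in Enforce mode; the specific rule (blocking mutable :latest image tags) is just an example:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  validationFailureAction: Enforce   # hard block: the API server rejects the request
  rules:
    - name: require-pinned-image
      match:
        any:
          - resources:
              kinds: [Pod]
      validate:
        message: "Pin image tags; ':latest' is not allowed."
        pattern:
          spec:
            containers:
              - image: "!*:latest"
```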
A PDF that says "don't drive off the cliff" is documentation.
A metal barrier on the edge is a guardrail.
Gates stop you and ask for permission.
Guardrails let you move fast — but won't let you fall off.
That's the difference in platform engineering too.
What eBPF is replacing:
- kube-proxy → Cilium
- Sidecars → Cilium/Ambient
- Observability → Pixie
- Security → Falco
The kernel is the new platform.
Full issue: podostack.substack.com/p/lazy-pull-smart-scale-ebpf-network
Instead of walking iptables chains:
- Hash map lookup: O(1)
- Direct packet steering
- No iptables touch
One flag:
--set kubeProxyReplacement=true
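The same switch as Helm values, a minimal sketch; the API server host and port are placeholders, and Cilium needs them once kube-proxy is gone:

```yaml
# values.yaml for the Cilium Helm chart
kubeProxyReplacement: true
k8sServiceHost: api.my-cluster.internal   # placeholder API server endpoint
k8sServicePort: 6443
```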
iptables is a firewall, not a load balancer.
kube-proxy hacks it into one.
Result: CPU spikes during updates, latency at scale, lost source IPs.
The sidecar tax was always the biggest complaint. Now it's optional.
Full comparison in Podo Stack 👇
podostack.substack.com 🛠️
When to stick with sidecars:
- You need L7 control on every pod
- Your team knows the debugging patterns
- You're already running it successfully
When to pick ambient:
- Memory is tight
- You're starting fresh
- You want gradual migration
ztunnel: L4 proxy, one per node (~20MB)
Waypoint: L7 proxy, on-demand
You get mTLS everywhere.
You pay for L7 only where you need it.
"Service mesh à la carte."
Every pod gets an Envoy proxy.
Full L7 control everywhere.
50-100MB RAM overhead per pod.
Startup latency: sidecar must init first.
It works. It's proven. It's expensive.
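For contrast, classic sidecar mode is also just a namespace label; the injection webhook then adds an Envoy container to every pod scheduled there. Namespace name is a placeholder:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    istio-injection: enabled   # every pod in this namespace gets an Envoy sidecar
```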