Bob Bui
bobbui.bsky.social
Architecting large-scale AI at work | After hours: building an opinionated AI chat stack
bobbui.com
A coding agent for $1.30. Not per hour—per month. Then ~¥40/mo (~$5–6). I pay $20 for GPT/Claude; Copilot is ~$10–19. At $6, “good enough” clears my bar—I’d pay. Would you still pay $20?
November 12, 2025 at 9:09 PM
Vibe coding is Collins' Word of the Year. The boom is real: in just 8 months, Lovable reached $100M ARR and Base44 exited with an $80M buyout. But it's not all rainbows and unicorns; the scars are louder. My prediction for the next wave: vibe debugging. What's your take on the vibe coding trend?
November 7, 2025 at 11:30 PM
Vibe-code at a big co? Yes, I do it all the time, but if I can’t flag it, kill it fast, or cap the blast radius, I don’t start. Safe: spikes, dashboards, copy/layout, glue. No-go: auth, schemas+migrations, security, external APIs. Comment your safe/no-go list? Follow me for the upcoming AI coding playbook.
November 7, 2025 at 2:18 AM
9:12 a.m.: everything froze. Not our stack, AWS us-east-1. I knew the Bedrock constraints; my levers were limited. AWS should default to multi-region, publish routing maps, hit cross-region parity, and keep admin access up during outages.
November 6, 2025 at 3:36 AM
OpenAI is now a for-profit PBC under a nonprofit parent. Rough split: foundation ~26%, Microsoft ~27%, others ~47%. With a $250B Azure commit + $25B pledge, expect higher prices as frontier AI costs mount. Plan for annual bumps and paywalled features.
Full article here: www.bbc.com/news/art...
November 5, 2025 at 5:35 PM
The debate is model quality. The bottleneck is capacity. We saw ~10k throttles in 5 min on Claude via AWS.
Reports: Anthropic in talks with Google to tap more TPU capacity.
Until then, ship with ops: budgets, backoff+jitter, degrade, breakers, cache.
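The backoff+jitter piece of that checklist can be sketched in a few lines. This is a hedged illustration, not any specific SDK's retry logic: `call_with_backoff` and its parameters are made-up names, and the exception handling is deliberately generic.

```python
import random
import time

def call_with_backoff(fn, max_retries=5, base=0.5, cap=30.0):
    """Retry a throttled call with exponential backoff + full jitter.

    `fn` is any zero-arg callable that raises when throttled; all names
    and defaults here are illustrative assumptions, not a real SDK API.
    """
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception:
            # last attempt: give up and surface the error
            if attempt == max_retries - 1:
                raise
            # full jitter: sleep a random amount up to the exponential cap
            time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))
```

Full jitter (random in [0, cap]) rather than fixed exponential delays spreads retries out, so a burst of throttled clients doesn't re-stampede the endpoint in lockstep.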
Discussion here: www.linkedin.com/pos...
November 5, 2025 at 10:13 AM
Vibe-coding tip: after 3 hours stuck, I guilt-tripped the AI—“my family will starve”—and it finally shipped a fix.
November 5, 2025 at 10:09 AM
Specialized beats general. Cursor shipped Composer and a multi-agent UI. By their numbers, it's ~4× faster and finishes most coding turns under 30s. The moat isn't a wrapper. It's owning the loop: model, runtime, editor. Where are LLM wrappers still defensible?
1/2
November 4, 2025 at 2:03 PM