Mike Lyndon
@mikelyndon.online
Follow your excitement. Law of attraction.
🐬Cetaceans🐋 A.I. Engineering. GameDev. VFX. Reading fantasy, sci-fi, self-development.
You’ve changed.
October 9, 2025 at 5:12 PM
Maybe I’m projecting, but how have you picked directions in the past? What caused you to dive into React or Bluesky, or to write longform (and greatly appreciated) posts recently? I have to imagine there’s an emotional force, a passion/joy, that guides some of that. What about that energy?
May 10, 2025 at 11:47 PM
I’m getting a ton of value from the posts, but it’s a slog. Because they’re long, I break them up into sessions, but the structure and style require maintaining some context and history of the changes. Sometimes I come back and I’m lost, and have to reread earlier sections.
April 23, 2025 at 1:30 AM
We live in a world where technology is already helping us achieve tasks faster and easier than ever before. We can order groceries for the week from our laptop in minutes. No agent needed.
January 24, 2025 at 3:50 PM
As an FX artist who used to spend days crafting fluid simulations (fire, water, smoke), the gen AI stuff often falls short. The Coke ad always makes me cringe. But this… this is really good.
January 16, 2025 at 1:06 AM
I’d argue 90% of the time an ‘agentic’ framework isn’t needed. I had to learn the basics about orchestration and distributed systems to realize that.
January 14, 2025 at 2:57 PM
The initial appeal was how easily you could construct/modify a graph: define your functions, define the flow/relationships.
It took time for me to understand agentic flows; I assumed complex systems needed a special framework.
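The graph idea above can be sketched in plain Python, no framework required. This is a minimal illustration of "define your functions, define the flow", not any particular library's API; all names here are made up.

```python
# Nodes are plain functions over a shared state dict; edges define the flow.
# Everything here is illustrative, not a specific framework's API.

def fetch(state):
    state["data"] = [1, 2, 3]
    return state

def transform(state):
    state["data"] = [x * 2 for x in state["data"]]
    return state

def report(state):
    state["summary"] = sum(state["data"])
    return state

# The flow/relationships: node name -> next node name (None ends the run).
nodes = {"fetch": fetch, "transform": transform, "report": report}
edges = {"fetch": "transform", "transform": "report", "report": None}

def run(start, state):
    node = start
    while node is not None:
        state = nodes[node](state)
        node = edges[node]
    return state

result = run("fetch", {})
# result["summary"] == 12
```

Modifying the graph is just editing the `edges` dict, which is a big part of why the pattern feels lightweight.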
January 14, 2025 at 2:57 PM
“According to EPRI, a single ChatGPT query requires around 2.9 watt-hours, compared to just 0.3 watt-hours for a Google search, driving a potential order of magnitude more power demand.” about.bnef.com/blog/liebrei...
Liebreich: Generative AI – The Power and the Glory | BloombergNEF
This year will go down in history as the year the energy sector woke up to AI. This is also the year AI woke up to energy. Is the data center power frenzy just the latest of a long line of energy sect...
about.bnef.com
January 12, 2025 at 5:06 PM
From the article:
Several screenwriters who’ve worked for the streamer told me a common note from company executives is “have this character announce what they’re doing so that viewers who have this program on in the background can follow along.”
January 4, 2025 at 2:21 AM
I think this is why some agentic workflows produce seemingly novel results: they’re designed to iteratively explore until a condition is met. We don’t necessarily do that when chatting with an LLM.
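The iterate-until-a-condition-is-met loop described above is simple to sketch. `call_llm` and `meets_condition` below are hypothetical stand-ins, not real APIs; a real workflow would call a model and evaluate the response (tests pass, score threshold, etc.).

```python
# Sketch of the explore-until-condition pattern. `call_llm` and
# `meets_condition` are hypothetical stand-ins for a real model call
# and a real acceptance check.

def call_llm(prompt):
    # Stand-in: a real implementation would call a model API.
    return prompt + " [improved]"

def meets_condition(response):
    # Stand-in acceptance check, e.g. "do the tests pass?"
    return response.count("[improved]") >= 3

def explore(prompt, max_iters=10):
    response = call_llm(prompt)
    for _ in range(max_iters):
        if meets_condition(response):
            break
        # Feed the result back in, like asking "write better code" again.
        response = call_llm(response)
    return response
```

The key difference from a chat session is that the loop keeps probing automatically, where a human might stop after one or two turns.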
January 3, 2025 at 8:37 PM
My take on this is that we’re iteratively exploring the latent space. The initial prompt and response put us in the ballpark. By asking “write better code” we’re probing the neighbouring high-dimensional space.
January 3, 2025 at 8:35 PM
I copied the files into Claude, laid out my steps, and had a mostly working refactor in seconds. What would have taken half a day or more was done. But that tension still lingers. Anyone else feel this?
December 28, 2024 at 7:57 PM
I think htmx and alpinejs are good additions. Lightweight, inline, llm-friendly.
December 23, 2024 at 10:22 PM
Is there any way we can see some of these experiments? I’d love to get a sense of the problem statements, how they’re structured, and what makes for a good response. I’m not an academic, but I feel like the methodology could be transferable.
December 12, 2024 at 6:54 PM
At first I wasn’t sure what Command offered over conditional edges, but after reading the docs it makes sense. But then I discovered interrupt(), and that feels like a minefield of footguns. I’ll likely put it in the wrong place in a node. Or worse, run into issues tracking multiple interrupts.
December 12, 2024 at 2:21 AM