Nimo πŸ³οΈβ€πŸŒˆ
banner
nimobeeren.com
Nimo πŸ³οΈβ€πŸŒˆ
@nimobeeren.com
he/him

Building cool things with or without AI (mostly with) β€” πŸ§ͺπŸŽΉπŸ’»πŸŒΈπŸ“·πŸ”§πŸ“”

🌐 nimobeeren.com
πŸ“ Eindhoven
Pinned
Finally published strands-solver 1.0.0 πŸš€

It solves NY Times Strands puzzles using a backtracking algorithm and word embeddings.

Try it with `uvx strands-solver solve today` (a free-tier GEMINI_API_KEY is optional)

github.com/nimobeeren/s...
GitHub - nimobeeren/strands-solver: A solver for Strands, the New York Times puzzle game.
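The core of it is plain backtracking; here's a minimal sketch of that part, with illustrative names rather than the actual strands-solver code, and the word-embedding side left out:

```python
# Illustrative sketch of the backtracking search (not the actual strands-solver code).
# Depth-first search over adjacent grid cells, pruning any path that is not a
# prefix of a dictionary word. The word-embedding part of the solver is omitted.

def find_words(grid: list[list[str]], words: set[str]) -> set[str]:
    prefixes = {w[:i] for w in words for i in range(1, len(w) + 1)}
    rows, cols = len(grid), len(grid[0])
    found: set[str] = set()

    def backtrack(r: int, c: int, path: str, visited: set[tuple[int, int]]) -> None:
        path += grid[r][c]
        if path not in prefixes:
            return  # prune: no dictionary word starts with this path
        if path in words:
            found.add(path)
        visited.add((r, c))
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                    backtrack(nr, nc, path, visited)
        visited.remove((r, c))  # un-choose the cell on the way back up

    for r in range(rows):
        for c in range(cols):
            backtrack(r, c, "", set())
    return found
```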
I’ve been getting so many chores done while Claude is working 🧽
I try to avoid context switching, even if that means I’ll be less β€œproductive” for a bit while my agent works. I’ll pick up a household chore while daydreaming about the project, perhaps thinking about what to build next. Definitely takes conscious effort not to go full frenzy mode though.
February 9, 2026 at 6:37 PM
I’ve definitely been feeling this too. I wonder to what degree this can be solved by just making the tools faster. It feels like context switching is the real killer and having to wait for 10-15 mins after each prompt just makes that very likely to happen.
February 9, 2026 at 6:37 PM
Agentic coding is starting to feel cost-bound again for me. I use API-based billing (bc work) and I’m no longer sure that what I build is necessarily worth the price tag.

$20 for a feature is not much, but also not negligible. And I don’t like that I have to consider that before starting.
February 9, 2026 at 6:21 PM
I’m not sure constraints are essential for open-endedness per se, but they do breed creativity.
February 8, 2026 at 2:53 PM
Hmm, that’s a good distinction. Terminals are definitely open-ended in the sense that you can end up in a wild variety of places, as long as they are text-based.
February 8, 2026 at 2:53 PM
I think CLIs are winning precisely because they are *not* very open-ended. You’re fully constraining yourself to text input/output, and that’s great for LLMs.
Why is the AI revolution happening on the command line first? Because the terminal is still the most open-ended general-purpose computing platform available.

Unix Philosophy is All You Need.
there has never been a better time to get terminal-pilled
February 8, 2026 at 12:55 PM
I hope that LLMs will help us make that transition. The hardest part of these large-scale migrations (for me) is understanding the exact behavior of the current system. I think an AI that independently explores (by trial-and-error) and documents systems would be super useful.
February 8, 2026 at 12:48 PM
Made a self-updating skill for scaffolding React projects. It runs a script so I get a fast and consistent setup every time, but uses the LLM to update the script based on shadcn docs.

github.com/nimobeeren/s...
February 7, 2026 at 6:32 PM
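For the curious, the scaffold script boils down to something like this. A rough sketch, not the actual file in the repo; the exact commands are assumptions based on today's Vite/shadcn docs and will drift, which is exactly why the skill regenerates the script:

```python
# Rough sketch of the kind of scaffold script the skill maintains (not the real one).
# The CLI commands below are assumed from current docs and may go stale over time,
# which is the whole point of having the LLM regenerate this file from the shadcn docs.
import subprocess
import sys


def run(cmd: list[str], cwd: str | None = None) -> None:
    print("$", " ".join(cmd))
    subprocess.run(cmd, cwd=cwd, check=True)


def scaffold(name: str) -> None:
    # Create a Vite + React + TypeScript project
    run(["npm", "create", "vite@latest", name, "--", "--template", "react-ts"])
    run(["npm", "install"], cwd=name)
    # Initialize shadcn/ui; depending on the docs of the day this may also
    # require setting up Tailwind first
    run(["npx", "shadcn@latest", "init"], cwd=name)


if __name__ == "__main__":
    scaffold(sys.argv[1] if len(sys.argv) > 1 else "my-app")
```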
Yeah, I was more relieved reading about what it _couldn't_ do yet than I was excited about what it could
February 6, 2026 at 6:21 AM
How good is Claude Code at Factorio?
February 5, 2026 at 8:32 PM
I've been having way more (good) interactions on here for the last few weeks! I was added to a starter pack but I feel like it can't just be that. Probably the For You feed has something to do with it? πŸ’–
February 5, 2026 at 5:11 PM
I think they'll get better taste. They're excellent at doing what you ask if you're unambiguous. But they'll get better at choosing the right thing to do if you're not.
February 5, 2026 at 5:09 PM
Trying to explain nondeterminism in LLMs: it's like talking to clones of the same person but they're in a slightly different mood each conversation
February 5, 2026 at 5:08 PM
Yes, LLMs can help you be less ambiguous! And to be fair, those optimizers do sometimes use tricks to squeeze the last few percentage points of performance out (at least so they claim)
February 5, 2026 at 11:42 AM
That’s true, and I guess models are getting better at choosing the right interpretation when there are multiple. It’s just going to be difficult when you rely on the same interpretation being chosen every single time.
February 5, 2026 at 11:39 AM
Rambling into a recording device like it’s the 70s, except I’m not causing psychic damage to the person who is asked to transcribe it
February 5, 2026 at 6:38 AM
The structure is not even strictly necessary for the LLMs, it’s mainly for humans to keep things interpretable and maintainable. You could just word vomit as long as there are no contradictions.
February 4, 2026 at 10:20 PM
The whole β€œprompt engineering” thing still has people believing you need to speak some arcane language to make good use of LLMs. In reality, you just need to be unambiguous, structured and complete. No tricks necessary.
February 4, 2026 at 10:20 PM
I think local and open data will become more and more important, with local-first and AT proto setting good examples
February 4, 2026 at 7:06 PM
prompt request?
February 3, 2026 at 8:50 PM
If it works, yes 😌
February 3, 2026 at 8:41 PM
Sonnet is definitely due for an upgrade! Opus is actually faster for me in most cases because it calls the right tools instead of messing around.
February 3, 2026 at 7:56 PM
Yeah, like please invest that effort into turning your app state into local Markdown and JSON (or SQLite) files so I can use my good AI tools to work with it
i wish all these random programs would stop adding AI assistants. not because AI is bad, but because these programs are bad and i don't want them implementing ai. like adobe acrobat. come on.
February 3, 2026 at 5:09 PM
Really cool! Looks like there’s a lot of potential for skills like this to speed up agentic work, independent of the models themselves.
February 3, 2026 at 7:25 AM