Ryan Atkinson
@ryanatkn.com
independent web dev, free software, hobby programming 🐢 https://github.com/ryanatkn 💤 https://www.ryanatkn.com/
Count me among those impatient users who learned Rust "too early". In 2015 the 1.0 DX was very rough, but the community has made amazing improvements to the language and tooling. Tried it again last year and I'm now so fond of it; it makes me less C-brained (I had rejected C++). Likely still Zig for gamedev.
January 7, 2026 at 5:58 PM
AI is a monkey's uncle

Claude Code Opus 4.5 has the juice, I have no other way to put it. Astonishing 1yr improvement

it still can't `cd`. scares me when it tries to `git stash`. I guess it's like how it can't play chess verbally? Can't do that kind of state well (yet?). But `cd` surely could be learned
December 9, 2025 at 6:07 PM
Opus 4.5's improved economy of tokens, combined with its taste and depth, enables it to assess and improve its own output in docs and design to a degree that feels like a significant capability gain. Small shift towards more steering than guiding, like I'm driving a machine through a landscape
December 3, 2025 at 12:42 PM
Claude Code Opus 4.5 makes me reel

there are many failure modes, like it still can't be trusted with `cd`:

> Oh, I'm in the wrong directory. Let me use absolute paths:

but it's very good. Hyperbolic words good
December 2, 2025 at 1:37 AM
Sonnet 4.5 still cannot be trusted with `cd` but wow is it good

The steady improvement of intent-inference and faithful instruction following has been a fun ride. Implicit style conformance too!

Sonnet 4.5 doesn't have the depth or subtlety of Opus 4.1, but for coding, I rarely reach for Opus now
November 7, 2025 at 10:57 AM
apologies to the people I'm not following back through the Bsky mechanism, but providing that social graph info tied to my IRL identity to all the databrokers out there (shout out, yall working hard) seems like a pretty bad deal

it feels somewhat antisocial but I don't want to publish that info
November 7, 2025 at 10:51 AM
A pattern I'm liking is giving Claude planning tasks while I do my normal investigations in parallel, then comparing notes after thinking things through myself. Sometimes it's fruitless, and sometimes the tool catches issues or offers ideas that improve the plan or save me a lot of time.
September 21, 2025 at 12:50 PM
in this list I'm reminded of some leadership that made dependency hygiene a point of emphasis, and others who insisted it's fine because hard drive space is cheap
September 9, 2025 at 9:41 AM
Today Claude Code self-reflects poorly when stretched beyond its capacities, even with everything needed in context -- for example, it's over-eager to mark unfinished work as complete. Asking it to check its work or "any cleanup?" regularly yields major improvements -- same questions I constantly ask myself.
September 7, 2025 at 11:53 AM
Claude is eager to please to the point of often denying its own better judgment, so I regularly express uncertainty and ask half-formed questions. Encouraging pushback and requesting tradeoff analyses are helpful to get its engineering sensibilities to override its consumer product priorities.
September 4, 2025 at 1:27 PM
For any complexity that can't be handled in one context window, or anything I want more visibility into, I have Claude externalize its todos in TODO_SOMETHING.md docs in the root. It can provide as much or as little detail as I want, I can make edits, and it'll check off work and make adjustments.
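(For concrete flavor, a hypothetical example of one of these docs -- the filename and items here are made up:)

```md
# TODO_PARSER_CLEANUP

- [x] extract the tokenizer into its own module
- [x] add tests covering the current behavior
- [ ] replace ad hoc error strings with typed errors
- [ ] update the docs once the API settles

notes: keep the old entrypoint exported until callers migrate
```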
September 4, 2025 at 10:48 AM
There are many "tricks" for tools like Claude Code because it's so new and raw. One of my favorites is telling it to leave `// TODO`s in the code when it can't fix something in the current pass -- for the Claude 4 models this avoids a lot of busted reward hacking. Same process I've used for years!
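(A minimal hypothetical TypeScript sketch of the kind of marker I mean -- the function and the TODO text are invented:)

```ts
// hypothetical example -- the point is the explicit marker, not the code around it
interface Config {
	port: number;
	verbose: boolean;
}

export const parseConfig = (raw: string): Config => {
	// TODO validate the parsed shape instead of trusting the cast (couldn't be handled in this pass)
	return JSON.parse(raw) as Config;
};
```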
September 4, 2025 at 10:13 AM
also `😅`
August 29, 2025 at 1:54 PM
here's the original conversation, you can see how I had to adjust the prompt to get something I liked

me saying 'more like "robocrap"' is like 'zoom in and search'

I didn't like the taste of the first place, but it was an easy hop from there to something that resonated

claude.ai/share/8336be...
August 29, 2025 at 1:26 PM
you can point your infoscopes anywhere
August 29, 2025 at 1:13 PM
Claude invented AI-rhea when I was looking for slop-like terms, and then both it and ChatGPT reflected the meaning on the first try

just a potty humor example, but it's clear LLMs can synthesize new legible info -- something many or most users already know, but it still seems widely doubted
August 29, 2025 at 11:01 AM
Claude didn't come up with infoscope on its own but it did come up with AI-rhea, no google hits at the time, as a vivid specialization of "slop"
August 29, 2025 at 10:35 AM
I like this framing, and they also do a lot more than information retrieval: they synthesize information in the moment with a complexity we don't see in prior knowledge artifacts -- spatial metaphor is one way I try to think about how the process is more than just lookup

bsky.app/profile/ryan...
I like thinking of LLMs as information telescopes, rendering views or summoning portals into their massive latent space
August 29, 2025 at 10:33 AM
Claude Code today cannot be trusted with `cd` (it loses track of which directory it's in) or `sed` (can do serious damage, lol amounts), and personally for my current workflows I have `git` denied too (sometimes this means it fails to search history tho, I need a better solution)
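(In case it's useful: one way to deny commands like that is Claude Code's permission settings -- roughly something like this in `.claude/settings.json`, though treat the exact rule syntax as an assumption and check the current docs:)

```json
{
  "permissions": {
    "deny": ["Bash(git:*)", "Bash(sed:*)"]
  }
}
```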
August 21, 2025 at 10:03 AM
code quality is inversely proportional to rocket emoji count
August 21, 2025 at 9:54 AM
"bigger on the inside" feels helpful for LLM intuition, they're little finite information multiverses

"next token predictor" acknowledges linear time but I don't think it's as profoundly limiting as it sounds, especially for machine-compressed time at many hz
August 21, 2025 at 12:37 AM
Dune feels more relevant despite the lack of G/CPUs
August 18, 2025 at 4:00 PM
I don't think the scifi I've consumed has prepared me for all of the cults that AI tools will enable
August 18, 2025 at 3:26 PM
Claude: An infoscope channels humanity's collective intelligence - every conversation adds a star to our shared map of knowledge space. We're all astronomers now, pointing these instruments together, discovering not distant galaxies but the patterns hidden in our own accumulated wisdom.
August 17, 2025 at 4:08 PM
Claude: An infoscope is an LLM as telescope for knowledge - peering into vast latent spaces where information exists as probability clouds. Different models offer different resolutions: Opus sees deeper, Sonnet scouts wider, working in concert like an observatory array mapping possibility itself.
August 17, 2025 at 4:08 PM