@racetrout.bsky.social
or tracert if you’re of the windows persuasion
Reposted
CALL me_m(a,b)

[oops wrong Jepsen]
December 9, 2025 at 8:02 PM
Good read! Appreciate the GPT-5 primer as a Claude addict. Ended up on a deep-dive on the Platformer site (it’s my weekend now!) and noticed the About page is a bit out of date: “(“Misinformation” is thankfully a lesser concern in a world in which the commander in chief is not a pathological liar.)”
August 8, 2025 at 1:18 AM
That’s such a good way of explaining it. I read code much faster than I can compose code (honestly, 90% of software dev is reading code rather than writing code). So I can read and evaluate the output of an LLM much faster than I can write equivalent code.
April 24, 2025 at 3:08 AM
I write software. If it hallucinates something, my software doesn’t work. I know how to tell good code from bad code, and I know how to write both. The LLM can write good code a decent amount of the time, just as well as I can, and faster.
April 24, 2025 at 2:54 AM
I’m a software developer. LLMs make mistakes writing code, sure. But so do I. The mistakes are different, but I think they’re roughly equal in frequency and magnitude. So it can do the same kind of work I do at basically the same error rate, but much, much faster.
April 24, 2025 at 2:51 AM
Okay, but what if they produce useful results for a topic I’m an expert in, and save me a bunch of time on something I can do, but slower?
April 24, 2025 at 2:42 AM