Daniel Grady
@danielgrady.net
danielgrady.net
It was on the science feed earlier this morning, but I see it is now blocked; thank you!
November 17, 2025 at 5:59 PM
@bossett.social I think this post counts as misinformation. Although widely reported in pop-sci media, there’s actually scant textual evidence to support the idea, and it’s based on a single person’s work.
November 17, 2025 at 2:34 PM
That description really resonates - was suddenly reminded of the few I’ve known who fit that mold, and how similar they are to Yarvin.
June 3, 2025 at 7:28 PM
"I'm working inside the Claude AI web interface, and looking at my older conversations. How can I tell which version of Claude produced these older responses?"

Basically wrong. A right answer would be roughly "You can't, unless you just guess."

claude.ai/share/9bf7a2...
May 29, 2025 at 5:24 PM
"Anthropic recently launched a new "Max" plan. Does this plan allow me to make API calls?"

Basically right, but confusing. Which to be fair is also true of the docs.

claude.ai/share/33f152...
May 29, 2025 at 5:24 PM
"How do I set up VSCode Copilot to use Anthropic with personal API keys?"

Wrong. Prompting it to search gives an answer that focuses on paid Copilot features, mentioning the API key option only in passing.

claude.ai/share/889658...
May 29, 2025 at 5:24 PM
"I'm working with FlatBuffers in Python. It seems like there is not a clean, idiomatic way to introspect on the data fields of a payload. Is this right? Why?"

Right. (I think.)
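
To illustrate what I mean, a minimal sketch, assuming the classes that `flatc --python` generates from the tutorial's `monster.fbs` schema (the module path and file names here are just the tutorial's):

```python
# Sketch only: MyGame/Sample/Monster.py is assumed to have been
# generated by `flatc --python monster.fbs` (the tutorial schema).
from MyGame.Sample.Monster import Monster

buf = open("monster.bin", "rb").read()
monster = Monster.GetRootAsMonster(buf, 0)

# Reading fields works only through the generated accessors...
print(monster.Hp(), monster.Mana(), monster.Name())

# ...but there's no generic call to enumerate them: nothing like
# monster.Fields() or dict(monster) exists, because the buffer carries
# offsets rather than field names. Enumerating fields means bringing
# the schema along yourself.
```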

claude.ai/share/ac8c7f...
May 29, 2025 at 5:24 PM
Even in mathematics, learning doesn’t happen (for me) until I’ve attempted to teach it — “making” the argument, maybe?
May 9, 2025 at 10:31 PM
Echoes of this even in very abstract problem spaces: in data science, the number one thing is to look at the data. Literally look at it, Claudia-Perlich-style: load the data into a spreadsheet, stare at it, and explain what you see.
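
Roughly what that looks like for me in practice (a sketch; the file name and columns are made up):

```python
# Sketch: the point is just to eyeball raw rows and summaries before
# doing anything clever. "orders.csv" is a made-up example file.
import pandas as pd

df = pd.read_csv("orders.csv")

print(df.head(20))                  # actual rows, as they really look
print(df.sample(20))                # rows from somewhere other than the top
print(df.describe(include="all"))   # ranges, counts, obvious weirdness
print(df.isna().mean())             # share of missing values per column
```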
May 9, 2025 at 10:31 PM
Jokes aside, it seems like OpenAI has changed the wording but not the import. Still looking for
@agrobbonta.oag.ca.gov to take action here and prevent this restructuring.
May 5, 2025 at 10:53 PM
Thanks! I love these posts you do with short examples for local inference; they are super helpful.
May 3, 2025 at 2:49 PM
Do you have some default options set? I had to go look through the `llm-mlx` documentation to find `-o unlimited 1` to reproduce your Atlantis example without it getting truncated early.
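
In case it helps anyone else, here's the same thing through the llm Python API, as a sketch: it assumes the llm-mlx plugin is installed and the model has already been downloaded, and the model ID and prompt are placeholders rather than the original example.

```python
# Sketch: `unlimited` is the llm-mlx option mentioned above; the Python
# API takes `-o`-style options as keyword arguments to prompt().
import llm

model = llm.get_model("mlx-community/Llama-3.2-3B-Instruct-4bit")  # placeholder model ID
response = model.prompt("Write a short story about Atlantis", unlimited=True)
print(response.text())
```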
May 3, 2025 at 2:45 PM
These are all consistent and not inherently evil:

- You're great at your job.
- Your job needs to change for the company to still make sense soon.
- You should make a conscious effort to get good at using this new technology.
April 30, 2025 at 8:01 PM
I'm skeptical about the examples of earlier tech rollouts. I'd guess there *were* groups that drove top-down adoption of spreadsheets even when many employees were resistant to the new technology, and that it was the right decision for the company.
April 30, 2025 at 8:01 PM
Sympathetic to the point, and I think the example of the "more normal policy" is the best part of the piece.
April 30, 2025 at 8:01 PM
"Good" isn't a singular axis. For many, part of "good show" is "makes me optimistic about humans," and on that axis it's a remarkable outlier.
April 23, 2025 at 6:48 PM
Crazy. Always surprises me how many of these one-purpose tools get produced; seems like people would just point to or use Pandoc.
December 13, 2024 at 9:47 PM