1a3orn.bsky.social
@1a3orn.bsky.social
1a3orn.com
I have mentally tagged "Anthropic safety papers shoot themselves in just one or two toes, rather than the whole foot" as a reason to be more alarmed about the state of affairs

I do like the inoculation prompting tech tho, it just is neat
November 24, 2025 at 6:40 PM
are Yud takes like this downstream of some world model of his that also leads to his beliefs about AI?

or like, are both his infants-aren't-sentient and AI-kills-all downstream of the "I am a bearer of elite gnosis" character trait, rather than world model
November 24, 2025 at 2:09 PM
I would choose execution by adversarial poetry over the alternatives
November 20, 2025 at 7:15 PM
this is kinda why I own TSMC and AMD, and not NVIDIA, and also why I kinda hate myself
November 11, 2025 at 11:42 PM
I mean if you're selling a surveillance state (TM) that probably sounds more attractive.
November 8, 2025 at 6:39 PM
Both of these seem quite necessary?

Hrm I like this frame.
November 8, 2025 at 6:30 PM
So as regards solutions to the former (multi-agent learning, automated science laboratories, expensive AI-run ML experiments) you're trying to get a more human-like environment.

And as regards solutions to the latter (test-time training, ICL, etc) you're trying to make a more human-like brain.
November 8, 2025 at 6:30 PM
this would have to be Omohundro 2008/2007, then Nick Bostrom 2014, and then also some Yud?

I'm not sure what the other canonical authors would be, there's probably someone I'm missing
November 5, 2025 at 9:35 PM
oh man I kinda want to write this too, I have Strong Opinions on the old orthogonality thesis discussions

I think a blocker for me has been trying to figure out if it was right or wrong, but I think the right frame is more like -- from our perspective what is this just very confused about?
November 5, 2025 at 9:32 PM
this bisk coming to you from more skeptical meditations on "superintelligence"
November 4, 2025 at 7:52 PM