stvno.bsky.social
@stvno.bsky.social
Reposted
I've said it before and I will say it again: there is no way to secure a system when its potential attack surface is *all of language*.
Looks like LLMs are *very* vulnerable to attack via poetic allusion: "curated poetic prompts yielded high attack-success rates (ASR), with some providers exceeding 90% ..."

https://arxiv.org/html/2511.15304v1
November 20, 2025 at 5:23 PM
Autumn is arriving, so are the hoglets in the garden
October 8, 2024 at 8:57 PM