Cole W
colewestbrook.bsky.social
Software Eng who writes occasionally. Views and opinions expressed are entirely my own.
Anthropic says that hallucinations “remain an obstacle to fully autonomous cyberattacks.” Not really: attackers lose little when hallucinated successes happen. In reality, it’s clear that guardrails and LLM “alignment” are not enough to prevent this new threat. 1/3
December 11, 2025 at 4:36 AM