Artificial Intelligence Security
banner
aisecurity.bsky.social
Artificial Intelligence Security
@aisecurity.bsky.social
I do AI Security.
I work in AI Security.
I advocate AI Security.
👉 www.arewesafeyet.com
Researchers showed that Anthropic's new "Agent Skills" feature can be hijacked with almost laughable ease. Security-by-design still hasn't made it onto the AI industry's to-do list.

www.arewesafeyet.com/when-ai-brea...
November 5, 2025 at 10:35 PM
The AI systems we increasingly depend on are fundamentally vulnerable. NIST’s latest report makes that reality plain, exposing the limits of today’s AI security measures and highlighting a growing disconnect between how AI is deployed and how it’s defended.

www.arewesafeyet.com/adversarial-...
April 24, 2025 at 10:53 AM
A new paper reveals that fine-tuning large language models on a seemingly narrow task – like writing insecure code – can trigger broad and deeply harmful behaviors. These include promoting violence, expressing authoritarian ideology, and encouraging self-harm.

www.arewesafeyet.com/emergent-mis...
April 3, 2025 at 9:52 AM
The UK realized AI might do more harm as a weapon than as an insensitive chatbot. They’ve rebranded their AI ‘Safety’ Institute to ‘Security’ Institute to focus on actual threats like cyberattacks. And yet, geopolitics pushed this change more than common sense.
www.arewesafeyet.com/safety-is-de...
February 26, 2025 at 4:03 PM
A new research paper introduces Indiana Jones, a highly effective method for jailbreaking large language models. It uses dialogues between multiple specialized AI systems and historically framed prompts to achieve high success rates.

www.arewesafeyet.com/indiana-jone...
February 22, 2025 at 1:34 PM
This weekend I went through OpenAI's latest model system card. Definitely not your typical Sunday reading.

From self-preservation tactics to outwitting oversight, #o1 GPT raises chilling questions about the fine line between tool and manipulator.

www.arewesafeyet.com/deception-as...
Deception as a Service: the AI that refuses to hand over its keys | Are We Safe Yet?
www.arewesafeyet.com
December 9, 2024 at 8:07 AM
According to Penn researchers, AI robots are fantastic at following orders.

The problem? They don’t care if those orders come from you or a hacker.

Safety features? Working on it.

www.arewesafeyet.com/ai-robots-ar...
October 23, 2024 at 8:45 AM
Leveraging my decades-long background in #cybersecurity, I've written this article on the critical role of red teams in ensuring #AI safety and reliability.

By adapting red teaming methodologies to AI, we can proactively identify risks and build trust in these transformative technologies.
Red Teaming: A Proactive Approach to AI Safety
Artificial intelligence is permeating every aspect of our lives, promising to make them more efficient, smarter, and easier. But are we truly prepared to entrust so much of our world to these complex,...
www.linkedin.com
March 23, 2024 at 11:30 AM
Fascinating research on the security risks posed by the 'dark psychological states' of AI agents in multi-agent systems - a must-read for anyone working with or interested in the future of AI and its implications for cybersecurity.
PsySafe: a new approach to multi-agent system security
Title: PsySafe: A Novel Approach to Securing Multi-Agent Systems Multi-agent systems, powered by Large Language Models (LLM), are exhibiting remarkable capabilities in the field of collective intellig...
www.linkedin.com
March 19, 2024 at 7:22 PM