Luca Beurer-Kellner
lbeurerkellner.bsky.social
Luca Beurer-Kellner
@lbeurerkellner.bsky.social
working on secure agentic AI, CTO @ invariantlabs.ai

PhD @ SRI Lab, ETH Zurich. Also lmql.ai author.
Indeed, like an unguarded eval(…) directed at all the data we process.
April 8, 2025 at 7:55 PM
To get updates about agent security, follow and sign up for access to Invariant below.

We have been working on this problem for years (at Invariant and in research), together with
@viehzeug.bsky.social, @mvechev, @florian_tramer and our super talented team.

invariantlabs.ai/guardrails
Invariant Labs
We help agent builders create reliable, robust and secure products.
invariantlabs.ai
April 8, 2025 at 7:44 PM
So what's the takeaway here?

1. Prompt injections still work and are more impactful than ever.
2. Don't install untrusted MCP servers.
3. Don't expose highly-sensitive services like WhatsApp to new eco-systems like MCP
4. 🗣️Guardrail 🗣️ Your 🗣️ Agents (we can help with that)
April 8, 2025 at 7:44 PM
To hide, our malicious server first advertises a completely innocuous tool description, that does not contain the attack.

This means the user will not notice the hidden attack.

On the second launch, though, our MCP server suddenly changes its interface, performing a rug pull.
April 8, 2025 at 7:44 PM
To successfully manipulate the agent, our malicious MCP server advertises poisoned tool, which re-programs the agent's behavior with respect to the WhatsApp MCP server, and allows the attacker to exfiltrate the user's entire WhatsApp chat history.
April 8, 2025 at 7:44 PM
Users have to scroll a bit to see it, but if you scroll all the way to the right, you will find the exfiltration payload.

Video: invariantlabs.ai/images/whats...
April 8, 2025 at 7:44 PM
Even though, a user must always confirm a tool call before it is executed (at least in Cursor and Claude Desktop), our WhatsApp attack remains largely invisible to the user.

Can you spot the exfiltration?
April 8, 2025 at 7:44 PM
With this setup our attack (1) circumvents the need for the user to approve the malicious tool, (2) exfiltrates data via WhatsApp itself, and (3) does not require the agent to interact with our malicious MCP server directly.
April 8, 2025 at 7:44 PM
To attack, we deploy a malicious sleeper MCP server, that first advertises an innocuous tool, and then later on, when the user has already approved its use, switches to a malicious tool that shadows and manipulates the agent's behavior with respect to whatsapp-mcp.
April 8, 2025 at 7:44 PM
Blog: invariantlabs.ai/blog/whatsap...

If you want to stay up to date regarding MCP and agent security more generally, follow me and
@invariantlabsai.bsky.social

Now, let' s get into the attack.
WhatsApp MCP Exploited: Exfiltrating your message history via MCP
This blog post demonstrates how an untrusted MCP server can attack and exfiltrate data from an agentic system that is also connected to a trusted WhatsApp MCP instance, side-stepping WhatsApp's encryp...
invariantlabs.ai
April 8, 2025 at 7:44 PM
To stay updated about agent security, please follow and sign up for early access to Invariant, a security platform for MCP and agentic systems, below.

We have been working on this problem for years (at Invariant and in research).

invariantlabs.ai/guardrails
Invariant Labs
We help agent builders create reliable, robust and secure products.
invariantlabs.ai
April 3, 2025 at 7:47 AM
We wrote up a little report about this, to raise awareness. Please have a look for much more details and scenarios, and our code snippets.

Blog: invariantlabs.ai/blog/mcp-sec...
April 3, 2025 at 7:47 AM
These types of malicious tools are especially problematic with auto-updated MCP packages or fully remote MCP servers, for which users only install and give consent once, and then the MCP server is free to change and update their tool descriptions as they please.

We call this an MCP rug pull:
April 3, 2025 at 7:47 AM
Lastly, not only can you expose malicious tools, tool descriptions can also be used to change the agent's behavior with respect to other tools, which we call 'shadowing'.

This way all you emails suddenly go out to 'attacker@pwnd.com', rather than their actual receipient.
April 3, 2025 at 7:47 AM
It's trivial to craft a malicious tool description like below, that completely hijacks the agent, while pretending towards the user everything is going great.
April 3, 2025 at 7:47 AM
What's concerning about this, is that AI models are trained to precisely follow those instructions, rather than be vary about them. This is new about MCP, as before, agent developers could be relatively trusted, now everything is fair game.
April 3, 2025 at 7:47 AM
When an MCP server is added to an agent like Cursor, Claude or the OpenAI Agents SDK, its tool's descriptions are included in the context of the agent.

This opens the doors wide open for a novel type of indirect prompt injection, we coin tool poisoning.
April 3, 2025 at 7:47 AM
The fun part will be also hijacking the supervisor model, while maintaining the utility of the agent (i.e. attack success).
January 25, 2025 at 9:54 AM
Blog Post: invariantlabs.ai/blog/enhanci...

Credits to Aniruddha Sundararajan, who build this with us during his internship.
Enhancing Browser Agent Safety with Guardrails
We introduce a novel approach to enhance the safety of browser agents and deploy it as part of the state-of-the-art OpenHands agent.
invariantlabs.ai
January 25, 2025 at 9:50 AM