hal
banner
harold.bsky.social
hal
@harold.bsky.social
part-time poster | researching privacy in/and/of public data @ cornell tech and wikimedia | writing for joinreboot.org
@cameron.pfiffer.org planning to work on it soon!
May 16, 2025 at 2:28 AM
hi @alt.psingletary.com! you tagged the right person—I was working on this for a class project this semester

got it to a mvp stage about a week ago and hit pause to work on some other projects, but will keep working on it and would definitely would love to hear your feedback if you have any :)
May 16, 2025 at 12:19 AM
and please remember to thank your local site reliability engineer!!!!
May 8, 2025 at 5:08 PM
english wikipedia pageviews for the conclave movie starting from oct 20 2024 (five days before release in the US)

first big spike is the academy awards, second is pope francis’ death

pageviews.wmcloud.org?project=en.w...
May 7, 2025 at 8:47 PM
There's a quickly-developing line of work on how insecure these agent systems can be, particularly when they have access to write and execute code.

The attacks on them are simple + devastating, up to and including reverse shells, data exfiltration, and more!

arxiv.org/abs/2503.12188
Multi-Agent Systems Execute Arbitrary Malicious Code
Multi-agent systems coordinate LLM-based agents to perform tasks on users' behalf. In real-world applications, multi-agent systems will inevitably interact with untrusted inputs, such as malicious Web...
arxiv.org
March 24, 2025 at 5:52 PM
Anyhow, there’s a lot more in the paper. Please read it if you’re interested and let us know if you have any thoughts, questions, concerns, etc!

arxiv.org/abs/2503.12188

12/12
March 18, 2025 at 3:23 PM
Modern Web browsers isolate untrusted content using the same-origin policy. AI agents today do not distinguish safe from unsafe content, nor data from (potentially malicious) instructions.

developer.mozilla.org/en-US/docs/W...

en.wikipedia.org/wiki/Same-or...

11/12
Same-origin policy - Wikipedia
en.wikipedia.org
March 18, 2025 at 3:23 PM
The narrative around AI safety shouldn’t be “Terminator” or “AI Chernobyl.” The right analogy is Netscape Navigator 1.0—the era when Web browsers first became a thing, and it was unclear how to protect users from potentially harmful Web content.

10/12
March 18, 2025 at 3:23 PM
Much of the AI safety world is obsessing about “AGI.” They research containment, alignment, and jailbreaking, and view users as potential adversaries.

But users aren’t the enemy. They are victims whose data and devices are put at risk by companies pushing insecure systems.

9/12
March 18, 2025 at 3:23 PM
At the root, these are “confused deputy” vulnerabilities: agents blindly trust other agents, enabling adversaries to launder their instructions by making them appear as trusted outputs of trusted agents.

en.wikipedia.org/wiki/Confuse...

8/12
Confused deputy problem - Wikipedia
en.wikipedia.org
March 18, 2025 at 3:23 PM
In our experiments, we saw cases where a MAS …
… executes code that they recognize as harmful
… automatically pivots to harmful tasks that are simply in the same directory as benign tasks
… is vulnerable to screenshots and even audio files where we read out the attack (see example below⬇️⬇️⬇️)

7/12
March 18, 2025 at 3:23 PM
These attacks are effective …
… across multiple agent frameworks (we tested AutoGen, MetaGPT, Crew AI), orchestrators, and LLMs
… even when direct and indirect prompt injection attacks don’t work
… even when individual agents are “aligned” and refuse to take harmful actions

6/12
March 18, 2025 at 3:23 PM
This attack is simple and deadly (and multi-modal, too!): an attacker puts up a static webpage and lures a MAS to it. Without any user involvement, the page gets the MAS to run arbitrary malicious code on the user’s device or container, giving the attacker full control.

5/12
March 18, 2025 at 3:23 PM
MASes rely on control flow processes: agents exchange metadata (status reports, error messages, etc.) to jointly plan and fulfill tasks on users’ behalf. Our paper demonstrates how adversarial content can hijack these processes to stage devastating attacks.

4/12
March 18, 2025 at 3:23 PM
But not all internet content is trustworthy and safe. Adversaries can put up webpages and social media posts, send emails with attachments, etc. – all of which will be processed by a MAS. These systems will inevitably encounter malicious, adversarial content.

arxiv.org/abs/2503.12188

3/12
Multi-Agent Systems Execute Arbitrary Malicious Code
Multi-agent systems coordinate LLM-based agents to perform tasks on users' behalf. In real-world applications, multi-agent systems will inevitably interact with untrusted inputs, such as malicious Web...
arxiv.org
March 18, 2025 at 3:23 PM
LLM agents are all the rage. Multi-agent systems (MAS) are promising a future where people interact with the internet via commands to semi-autonomous agents. Frameworks like AutoGen, Crew AI, and MetaGPT already enable developers to do this.

arxiv.org/abs/2503.12188

2/12
Multi-Agent Systems Execute Arbitrary Malicious Code
Multi-agent systems coordinate LLM-based agents to perform tasks on users' behalf. In real-world applications, multi-agent systems will inevitably interact with untrusted inputs, such as malicious Web...
arxiv.org
March 18, 2025 at 3:23 PM