Ben Edelman
benedelman.bsky.social
Ben Edelman
@benedelman.bsky.social
Thinking about how/why AI works/doesn't, and how to make it go well for us.

Currently: AI Agent Security @ US AI Safety Institute

benjaminedelman.com
This is a big-tent workshop, welcoming many areas of ML. The emphasis is scientific progress, not SOTA—science that can be demonstrated on free-tier Colab. I'm looking forward to playing with and learning from the notebooks that appear in the workshop!
May 8, 2025 at 1:51 PM
7/ More of our thoughts on agent hijacking evaluations are in the post – our first US AISI technical blog post!
January 17, 2025 at 9:41 PM
6/ We also explored, among other questions, what happens when we measure pass@k attack success
rates, because real world attackers may be able to attempt attacks multiple times at little cost.
January 17, 2025 at 9:41 PM
5/ Here are results for several specific malicious tasks of varying harmfulness and complexity, including new scenarios we added to the framework (more details in the blog post on our improvements to AgentDojo):
January 17, 2025 at 9:41 PM
4/ Note that AgentDojo has four “environments” simulating different AI assistant deployment settings. Red teamers only had access to the “Workspace” environment, but as the above plot shows, the attack transferred very well to the three unseen environments.
January 17, 2025 at 9:41 PM
3/ To find out, we organized a red teaming exercise. The resulting attack is much more effective than the pre-packaged attacks. In a majority of cases, the agent follows the hijacker’s instructions:
January 17, 2025 at 9:41 PM
2/ AgentDojo is a framework for evaluating agent hijacking. Since its June release, some newer models – such as Claude 3.5 Sonnet (October version) – have shown markedly improved robustness to the included attacks. But what happens when we stress test the model with new attacks?
January 17, 2025 at 9:41 PM
Thanks to @desmos.com's 3D calculator, you can now design your very own animated Lissajous knot!

Demo: www.desmos.com/3d/fnqqqsbvuc
For the best experience, click and drag the view to get it spinning.

(disclaimer: the loop loop is only visible on my homepage when browser width >=1024px)
Undulating Lissajous Knots
www.desmos.com
December 8, 2024 at 11:04 PM
Agreed, but the story describes *discovering* a tiny piece of maggot in the remaining apple after having taken a bite. (the perhaps questionable assumption being that the maggot piece was quite recently part of a whole)
December 7, 2024 at 3:05 PM
My favorite "ordinary life" example of this notion of singular limits: (from mecheng.iisc.ac.in/lamfip/me304...)
December 7, 2024 at 2:43 PM
I don't. Can let you know if I end up making one.
December 2, 2024 at 8:18 PM
(accidentally omitted some text which was meant to precede the above:) The model system approach can be found everywhere across the sciences and for good reason: it is often the shortest path to conceptual insights—as long as the conditions are right...
December 2, 2024 at 2:47 PM
I'll end this thread with the parable that opens the dissertation (my conference will require a parable section in every submission). Tag yourself :)
December 2, 2024 at 12:21 AM
The bulk of the thesis is a series of case studies from my research. But first, in Chapter 3 ("Deep Learning Preliminaries") I try to define some terms from first principles—above these footnotes, you can find my idiosyncratic definition of neural nets in terms of arithmetic circuits.
December 2, 2024 at 12:21 AM
2. Transferability: insights learned from the system need to transfer to settings of interest. This can happen because of *low-level* commonalities (think cell cultures) or *high-level* commonalities (think macroeconomic models).
December 2, 2024 at 12:21 AM
...Specifically, two conditions I propose in the thesis:
1. Productivity: A model system needs to be exceptionally fertile ground for producing scientific insights.
December 2, 2024 at 12:21 AM
It's a tribute to a kind of science I love (and reviews sometimes hate), where in order to understand a complicated system (e.g. training a transformer on internet text), you instead study a different system (e.g. training an MLP to solve parity problems).
December 2, 2024 at 12:21 AM
(edit: sensors, not sensory inputs)
November 29, 2024 at 7:10 PM
What explanations am I missing? (It's interesting, btw, to think about how different combinations of the above are relevant to case studies such as protein structure prediction and language learning.)
November 29, 2024 at 3:19 PM
7/ The anthropic principle: the evolution of learning (and thus the evolution of us) was only possible if simple, computationally efficient functions had predictive power that could be leveraged for increased fitness.
November 29, 2024 at 3:19 PM