disreGUARD
disreguard.com
disreGUARD
@disreguard.com
AI security research lab focused on building infrastructure and patterns to defend against prompt injection

from cofounders of `npm audit` and Code4rena

disreguard.com
Can we give an agent a tool to check which instructions it should actually follow?

And if we do, can we make sure it uses it?

Yes, and yes.

disreguard.com/blog/posts/s...
sig: instruction signing for prompt injection defense
We can create a clear trust boundary by signing instructions and giving models a tool to participate in making secure choices
disreguard.com
February 9, 2026 at 11:54 PM
Reposted by disreGUARD
For the last year, I've been heads down working on building infrastructure to better defend against prompt injection.

@disreguard.com is a security research lab focused on tools and methods for ergonomic approaches for the rigorous defense-in-depth needed for agents.

disreguard.com/blog/posts/p...
Injection is inevitable. Disaster is optional.
Prompt injection is an infrastructure problem. We can't prevent it, but we can massively reduce the risk and impact.
disreguard.com
February 9, 2026 at 11:50 PM