disreGUARD
disreguard.com
disreGUARD
@disreguard.com
AI security research lab focused on building infrastructure and patterns to defend against prompt injection

from cofounders of `npm audit` and Code4rena

disreguard.com
There is no silver bullet for prompt injection, but `sig` addresses one of the most tricky challenges:

LLMs are dealing with a wall of text. Distinctions between trusted instructions and untrusted text are flimsy at best.

By involving the model, we can add texture to make trust boundaries clearer.
February 10, 2026 at 12:16 AM
`sig` is a simple tool. The concept works like this:

- sign genuine system and user instructions
- require mutations to system prompts to be signed
- give model a tool to verify instructions are genuine
- gate critical tool usage on the model calling verify()
February 10, 2026 at 12:16 AM