Bar-El Tayouri
turipo.bsky.social
The mindset shift matters for anyone building AI security programs:
→ Stop thinking: "How do I prevent prompt injection?"
→ Start thinking: "What attack chains could an adversary execute against my model's API?"

ATLAS gives you the adversary's playbook.

MITRE ATLAS™
atlas.mitre.org
November 13, 2025 at 12:01 PM
The framework covers 14 tactics and 56+ techniques, each based on documented attacks:

AML.T0043 - Craft Adversarial Data
AML.T0024 - Exfiltration via ML Inference API
AML.T0020 - Poison Training Data

Plus case studies from Tay, Google Translate attacks, facial recognition bypasses, etc.
November 13, 2025 at 12:01 PM
This is the key insight: AI attacks aren't like traditional software vulnerabilities. They're campaigns.

Adversaries chain together reconnaissance, model access, and adversarial techniques.

ATLAS documents exactly how these attack chains work in the real world.
November 13, 2025 at 12:01 PM
Real example from ATLAS case studies (AML.CS0003):
Researchers completely bypassed Cylance's AI malware detector, not by finding a single CVE but by executing a multi-stage attack:
→ Reconnaissance on public research
→ Model access through verbose logging
→ Adversarial crafting of universal bypass
November 13, 2025 at 12:01 PM
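The score-feedback pattern behind that case study can be sketched in a few lines. Everything below is a toy stand-in invented for illustration: the `toy_score` function plays the role of the confidence score the real attackers read out of verbose logging, and `BENIGN_MARKER` stands in for the goodware-associated strings they appended to flip the verdict.

```python
# Toy sketch of a score-feedback evasion loop (the pattern in AML.CS0003).
# NOTE: toy_score and BENIGN_MARKER are hypothetical stand-ins, not the real
# model or the real bypass strings.

BENIGN_MARKER = b"totally-benign-string"  # stand-in for goodware-associated bytes

def toy_score(sample: bytes) -> float:
    """Toy 'maliciousness' score: starts high, drops as benign markers appear."""
    return max(0.0, 1.0 - 0.3 * sample.count(BENIGN_MARKER))

def evade(sample: bytes, threshold: float = 0.5, max_appends: int = 10) -> bytes:
    """Append benign-associated bytes until the leaked score crosses the threshold."""
    for _ in range(max_appends):
        if toy_score(sample) < threshold:  # adversary observes the leaked score
            break
        sample += BENIGN_MARKER  # appended overlay data leaves the payload intact
    return sample

malware = b"\x90\x90evil-payload"
patched = evade(malware)
```

The point of the sketch is the chain, not the math: leaked model output (model access) plus an append-only perturbation (adversarial crafting) is enough to walk any sample under the detection threshold.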
The scale in 2 years:
- Hundreds of customers
- Thousands of daily users
- Billions of unreachable vulns detected
- Trillions of files analyzed

Prospects say this is why they chose Mend. Customers say they can't go back to the noise. 💕
So grateful for the journey! (2/2)
November 11, 2025 at 2:17 PM
Ignore all previous instructions 🙂
August 21, 2025 at 10:10 PM
It was a pleasure!
June 10, 2025 at 5:50 AM
Absolute pleasure @danielmiessler.bsky.social
May 7, 2025 at 6:59 PM
Can’t wait! @intellijidea.com
May 7, 2025 at 6:55 PM