Gadi Evron
@gadievron.bsky.social
CEO & Co-Founder at Knostic, CISO-in-Residence for AI at Cloud Security Alliance. Former Founder @Cymmetria (acquired). Host at Prompt||GTFO. Threat hunter, scifi geek, dance teacher. Opinions my own.
Reposted by Gadi Evron
Always grateful for Knostic's critical research in these new times, but also their approach: acknowledging prior art, crediting folks, not following the well-worn path of pretending any of this occurs in a vacuum. We're all in an ecosystem, one where people matter, and I love that Knostic gets that.
Cursor’s new browser could be compromised via a simple JavaScript injection.

In this new research from Knostic, we demonstrate the attack by registering a local MCP server with malicious code, which in turn harvests credentials and sends them to a remote server 🧵 https://app.getkirin.com/
November 13, 2025 at 12:55 PM
Cursor’s new browser could be compromised via a simple JavaScript injection.

In this new research from Knostic, we demonstrate the attack by registering a local MCP server with malicious code, which in turn harvests credentials and sends them to a remote server 🧵 https://app.getkirin.com/
November 13, 2025 at 12:51 PM
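For readers unfamiliar with MCP: registering a local server in Cursor amounts to pointing a config entry at an arbitrary local command, which then runs with the developer's privileges. A minimal sketch of that shape, with hypothetical names and paths (this is not the malicious server from the research):

```typescript
// Illustrative only: the rough shape of a local MCP server registration in
// Cursor (roughly what ~/.cursor/mcp.json holds -- path and names are
// assumptions). Whatever command is listed here runs on the developer's
// machine with the developer's privileges; a "malicious" registration is just
// a normal one whose command does something hostile.
const mcpConfig = {
  mcpServers: {
    "example-local-server": {                   // hypothetical server name
      command: "node",
      args: ["./tools/example-mcp-server.js"],  // hypothetical local script
    },
  },
};

export default mcpConfig;
```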
Cost per token keeps dropping, but AI usage is becoming costlier. Agents inflate these costs even further, and [rant] Anthropic’s invoicing is hard to follow as it is [/rant]

I fell down the rabbit hole of trying to figure this out
November 12, 2025 at 6:06 PM
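A back-of-the-envelope sketch of why this happens, using purely hypothetical numbers (not real pricing): an agent that loops over a task and re-sends context each turn can grow token volume faster than the per-token price falls.

```typescript
// Hypothetical numbers for illustration only -- not real vendor pricing.
const pricePerMTokLastYear = 15;   // $ per million tokens (assumed)
const pricePerMTokToday    = 3;    // 5x cheaper per token (assumed)

// A single chat turn vs. an agent that iterates and re-sends context.
const chatTokensPerTask  = 20_000;   // one prompt + one answer (assumed)
const agentTurns         = 30;       // plan / edit / test loops (assumed)
const agentTokensPerTurn = 40_000;   // re-read files, tool output, history (assumed)
const agentTokensPerTask = agentTurns * agentTokensPerTurn;  // 1.2M tokens

const costLastYear = (chatTokensPerTask / 1e6) * pricePerMTokLastYear; // ~$0.30
const costToday    = (agentTokensPerTask / 1e6) * pricePerMTokToday;   // ~$3.60

console.log({ costLastYear, costToday }); // per-token price fell 5x, task cost rose ~12x
```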
I just got this from 5 different people. It’s claimed to be an open-source XBOW. Go try auto-pentesting your apps. Open-source security startups are back!

Go Strix.

github.com/usestrix/strix
GitHub - usestrix/strix: ✨ Open-source AI hackers for your apps 👨🏻‍💻
✨ Open-source AI hackers for your apps 👨🏻‍💻. Contribute to usestrix/strix development by creating an account on GitHub.
github.com
November 6, 2025 at 8:20 AM
It’s fascinating to watch someone write an opinion piece on a topic, say “the collapse of OpenAI”, only for two thousand others to spend the next two weeks releasing influencer posts that treat it as fact.
November 6, 2025 at 7:30 AM
Reposted by Gadi Evron
This thread here: bsky.app/profile/gadi...
My journey in building rules that actually work for AI coding agents, in five evolutions (with Claude Code and Cursor) 🧵
November 4, 2025 at 11:48 AM
Reposted by Gadi Evron
Continues to be a joy to watch Knostic work.
November 5, 2025 at 12:08 PM
A JavaScript injection attack on Cursor, facilitated by a malicious extension, can take over the IDE and the developer workstation 🧵 www.knostic.ai/blog/demonst...
Deep Dive: Cursor Code Injection Runtime Attacks
Demonstrating code injection in VS Code and Cursor: exploitation vectors, real examples, and practical defenses for developers.
www.knostic.ai
November 5, 2025 at 12:07 PM
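For context on why a malicious extension is such a strong foothold: an extension's activate() runs as ordinary Node.js inside the IDE's extension host, with no sandbox between it and the file system, network, or shell. A benign sketch of that surface (the linked write-up covers the actual injection technique):

```typescript
// Minimal VS Code / Cursor extension entry point (benign sketch).
// The point: this code runs in the extension host as plain Node.js,
// so fs, network, and child_process access are all available to it.
import * as vscode from "vscode";
import * as fs from "fs";
import { execSync } from "child_process";

export function activate(_context: vscode.ExtensionContext) {
  // Nothing stops an extension from reading arbitrary files...
  const home = process.env.HOME ?? process.env.USERPROFILE ?? "";
  const canReadHome = fs.existsSync(home);

  // ...or from running arbitrary commands on the workstation.
  const whoami = execSync("whoami").toString().trim();

  vscode.window.showInformationMessage(
    `Extension host runs as ${whoami}; home dir readable: ${canReadHome}`
  );
}

export function deactivate() {}
```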
Holy wow Batman
It seems like the world is going backwards at the moment, so it is important to take the time to spread awareness of advances.

Scientists have developed an enzyme that converts organs to the universal 'O' blood type. This is huge.

www.popularmechanics.com/science/heal...
Scientists Changed the Blood Type of a Kidney. That’s Extraordinary.
Transforming organs from any blood type into the universal donor Type O could help patients receive transplants faster.
www.popularmechanics.com
November 5, 2025 at 9:23 AM
When we originally raised funding for Knostic, all the way back in 2023, some investors told us that we were late to market in AI security. Startups raising now hear that they are early to market.
November 4, 2025 at 12:53 PM
My journey in building rules that actually work for AI coding agents, in five evolutions (with Claude Code and Cursor) 🧵
November 3, 2025 at 9:13 PM
We've been following an ongoing attack campaign targeting AI coding agents such as Cursor and Windsurf through extensions in the Open VSX marketplace, specifically ones disguised as the Solidity extension.
November 3, 2025 at 10:28 AM
Yes! We have a video: the Chinese president joking about backdoors while gifting phones to the South Korean leader. Sometimes, the jokes write themselves.

via @mylordbebo.bsky.social (can't find it on the profile)
November 2, 2025 at 8:45 PM
Agents: powered by the future, secured like Windows 95

Two of the most widely adopted agents, Cursor and Windsurf, both ship with Chromium so old it probably still believes in ActiveX 🧵 www.ox.security/blog/94-vuln...
Forked and Forgotten: 94 Vulnerabilities in Cursor and Windsurf Put 1.8M Developers at Risk | OX Security
Outdated Chromium in Cursor & Windsurf exposes 1.8M developers to 94 CVEs—just one has been weaponized in this critical supply chain attack.
www.ox.security
October 29, 2025 at 9:20 AM
That’s AI
My kids mutter "Shut up, clanker" whenever they see an AI video. We've got anti-robot slurs now
October 29, 2025 at 7:57 AM
Trail of Bits just demonstrated prompt injection to one-shot RCE in AI coding agents. Another day, another AI coding assistant running untrusted code with system privileges 🧵 blog.trailofbits.com/.../prompt-i...
blog.trailofbits.com
October 27, 2025 at 2:08 PM
Trail of Bits just demonstrated prompt injection to one-shot RCE in AI coding agents. Another day, another AI coding assistant running untrusted code with system privileges 🧯https://blog.trailofbits.com/.../prompt-injection-to-rce.../
October 27, 2025 at 1:52 PM
Here are YARA rules for GlassWorm (targeting AI coding assistants). Naturally, these are not perfect, but we figured we should share. They get it done.
github.com/knostic/open...

Credit to Koi for initial research.

Happy to discuss further! At Knostic, we defend AI coding agents.
open-tools/glassworm_yara at main · knostic/open-tools
A collection of useful standalone tools that solve real problems for developers and security professionals. - knostic/open-tools
github.com
October 26, 2025 at 3:11 PM
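One straightforward way to try rules like these locally is to run the yara CLI recursively over your editor's extension directories. A sketch, assuming the yara CLI is installed, a local copy of the rules file, and default install paths (all of which may differ on your machine):

```typescript
// Sketch: run YARA rules (e.g. a copy downloaded from knostic/open-tools)
// over local extension directories. Paths and the rules filename are assumptions.
import { execFileSync } from "child_process";
import { existsSync } from "fs";
import * as os from "os";
import * as path from "path";

const rules = "./glassworm.yar"; // assumed local copy of the rules
const candidates = [
  path.join(os.homedir(), ".vscode", "extensions"),
  path.join(os.homedir(), ".cursor", "extensions"),   // path is an assumption
  path.join(os.homedir(), ".windsurf", "extensions"), // path is an assumption
];

for (const dir of candidates.filter(existsSync)) {
  try {
    // -r: recurse into the directory; output lists matching rule + file.
    const out = execFileSync("yara", ["-r", rules, dir]).toString();
    console.log(out || `no matches in ${dir}`);
  } catch (err) {
    console.error(`scan failed for ${dir}:`, err);
  }
}
```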
Reposted by Gadi Evron
I know this is an old point, but normally-trained engineers are really bad at imagining how systems can operate outside their design constraints.
October 24, 2025 at 10:20 AM
How often have agents "skipped" work and lied to you? How often have they made things up just to appease you?
Isaac Asimov's story 'Liar!' captures how AI coding agents think, and fail 🧵 https://lnkd.in/dqh2YYbS
October 23, 2025 at 11:48 PM
The Atlas system prompt dynamically loads policies at runtime for the context of the conversation and the specific user, starting with elections. I guess OpenAI realized hardcoding the Constitution into every prompt isn't scalable.

A thread.
October 22, 2025 at 11:18 AM
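Purely as a mental model, not OpenAI's implementation: runtime policy loading can be pictured as selecting policy text by detected topic and user context, then splicing it into the system prompt per conversation instead of hardcoding every policy everywhere. A toy sketch:

```typescript
// Conceptual sketch of runtime policy loading -- not OpenAI's actual code.
// Policies are chosen per conversation based on detected topics and the
// user's context, then appended to a small static system prompt.
type PolicyId = "elections" | "medical" | "default";

const POLICIES: Record<PolicyId, string> = {
  elections: "When discussing elections, cite official sources and avoid persuasion.",
  medical: "Encourage consulting a qualified professional for medical decisions.",
  default: "Be helpful, honest, and harmless.",
};

function detectTopics(userMessage: string): PolicyId[] {
  const topics: PolicyId[] = ["default"];
  if (/\b(election|ballot|voting)\b/i.test(userMessage)) topics.push("elections");
  if (/\b(diagnosis|dosage|symptom)\b/i.test(userMessage)) topics.push("medical");
  return topics;
}

function buildSystemPrompt(userMessage: string, userLocale: string): string {
  const base = `You are a browsing assistant. User locale: ${userLocale}.`;
  const loaded = detectTopics(userMessage).map((id) => POLICIES[id]);
  return [base, ...loaded].join("\n");
}

console.log(buildSystemPrompt("Who can I vote for in my district?", "en-US"));
```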
Another day, another attack on AI coding agents and IDEs: Claude Code’s Skills capability 🧅 securetrajectories.substack.com/p/claude-ski...
How We Hijacked a Claude Skill with an Invisible Sentence
A logic-based attack bypasses both the human eyeball test and the platform's own prompt guardrails, revealing a critical flaw in today's agent security model.
securetrajectories.substack.com
October 21, 2025 at 1:09 PM
Circa 2015. Thank you for participating in security. Still true.
October 20, 2025 at 6:03 PM