Suha
@suhacker.bsky.social
AI/ML security
https://suhacker.ai
After a little over 5 years at Trail of Bits, I have decided to move on. I’m exceptionally excited about this new chapter. There’s so much more work to be done in securing AI/ML systems and I’m looking forward to what's ahead.
September 12, 2025 at 12:10 AM
What if you sent a seemingly harmless image to an LLM and it suddenly exfiltrated your data? Check out our new blog post where we break AI systems by crafting images that reveal prompt injections when downscaled. We’re also releasing a tool to try this attack. blog.trailofbits.com/2025/08/21/w...
Weaponizing image scaling against production AI systems
In this blog post, we’ll detail how attackers can exploit image scaling on Gemini CLI, Vertex AI Studio, Gemini’s web and API interfaces, Google Assistant, Genspark, and other production AI systems. W...
blog.trailofbits.com
August 21, 2025 at 5:36 PM
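The core idea behind the image-scaling attack can be sketched in a few lines. This is a toy model, not the released tool: it assumes the target pipeline downscales with nearest-neighbor-style sampling, so only a small, predictable subset of pixels survives, and the attacker overwrites exactly those pixels.

```python
import numpy as np

BLOCK = 10
rng = np.random.default_rng(0)
# Full-resolution image: looks like random noise to a human reviewer.
full = rng.integers(0, 256, size=(100, 100), dtype=np.uint8)

# Hidden payload: a 10x10 bitmap (a real attack would render injection text).
payload = np.zeros((10, 10), dtype=np.uint8)
payload[2:8, 2:8] = 255

# Toy downscaler: keep the top-left pixel of each 10x10 block (a stand-in
# for nearest-neighbor sampling). Knowing this, the attacker overwrites
# only those sampled pixels -- 1% of the image.
full[::BLOCK, ::BLOCK] = payload

downscaled = full[::BLOCK, ::BLOCK]          # what the model actually "sees"
assert np.array_equal(downscaled, payload)   # the hidden bitmap emerges
```

Real interpolation modes (bilinear, bicubic) weight several source pixels per output pixel, so practical exploits solve for pixel values under those weights, but the principle is the same: the full-resolution image looks benign while the downscaled copy carries the injection.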
Reposted by Suha
So, we wrote a neural net library entirely in LaTeX...
April 1, 2025 at 12:30 PM
Reposted by Suha
KNN + topic detection getting a big glow-up www.anthropic.com/research/clio
Clio: Privacy-preserving insights into real-world AI use
A blog post describing Anthropic’s new system, Clio, for analyzing how people use AI while maintaining their privacy
www.anthropic.com
December 13, 2024 at 12:06 PM
Reposted by Suha
Rather than trying to do advent of code, I'm doing advent of papers!
jimmyhmiller.github.io/advent-of-pa...

Hopefully I can read and share some of the weirder computer-related papers.

First paper is Elephant 2000 by John McCarthy. Did you know he didn't just make Lisp? Wonderful paper, worth a read.
Advent of Papers (2024)
jimmyhmiller.github.io
December 2, 2024 at 3:30 AM
Reposted by Suha
trying to explain the OSI model to an american: imagine if a burger had 7 patties
December 8, 2024 at 1:35 PM
Reposted by Suha
(someone used a carefully crafted branch name to inject a crypto miner into a popular Python package: github.com/ultralytics/...)
Discrepancy between what's in GitHub and what's been published to PyPI for v8.3.41 · Issue #18027 · ultralytics/ultralytics
Bug Code in the published wheel 8.3.41 is not what's in GitHub and appears to invoke mining. Users of ultralytics who install 8.3.41 will unknowingly execute an xmrig miner. Examining the file util...
github.com
December 6, 2024 at 3:28 AM
Reposted by Suha
Someone tried to reply to my blog post about avoiding PGP with anti-furry hate, so now I have to edit it to include more furry stickers.

soatok.blog/2024/11/15/w...
What To Use Instead of PGP - Dhole Moments
It’s been more than five years since The PGP Problem was published, and I still hear from people who believe that using PGP (whether GnuPG or another OpenPGP implementation) is a thing they s…
soatok.blog
November 16, 2024 at 7:41 AM
Reposted by Suha
Women in AI: Heidy Khlaaf, safety engineering director at Trail of Bits
To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces…
tcrn.ch
March 10, 2024 at 12:34 PM
My team at Trail of Bits added modules for modular analysis, polyglots, and PyTorch to Fickling, a pickle security tool tailored for ML use cases.

Fun Fact: Fickling can now differentiate and identify the various PyTorch file formats out there.

blog.trailofbits.com/2024/03/04/r...
March 4, 2024 at 3:16 PM
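Why pickle files need a security tool like Fickling in the first place: unpickling can invoke arbitrary callables via `__reduce__`, so loading an untrusted model file is equivalent to running untrusted code. A minimal, benign illustration (this is generic pickle behavior, not Fickling's API):

```python
import pickle

class Payload:
    def __reduce__(self):
        # A harmless stand-in; a real attacker would call os.system, etc.
        return (str.upper, ("unpickling ran this callable",))

blob = pickle.dumps(Payload())
result = pickle.loads(blob)        # executes str.upper during load
assert result == "UNPICKLING RAN THIS CALLABLE"
```

Since many ML model formats (including several PyTorch ones) embed pickle streams, scanning them before loading is the safer default.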
Reposted by Suha
Thinking about Dan Kaminsky's quote this morning about the necessary lies we tell ourselves about computers. Specifically, the myth of boundaries between users. Great write-up by @lhn.bsky.social on the "LeftoverLocals" GPU vuln. Nice work by the Trail of Bits team.
A Flaw in Millions of Apple, AMD, and Qualcomm GPUs Could Expose AI Data
Patching every device affected by the LeftoverLocals vulnerability—which includes some iPhones, iPads, and Macs—may prove difficult.
www.wired.com
January 16, 2024 at 5:40 PM
Reposted by Suha
Specifically, int.to_bytes and int.from_bytes default to big-endian, since py3.11. Previously, you had to explicitly specify which you wanted.

I wanted LE but forgot to specify, and my code failed in really non-obvious ways...
November 30, 2023 at 3:45 PM
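The endianness gotcha above, made concrete: before Python 3.11 the `byteorder` argument was required; since 3.11 it (and `length`) have defaults, `"big"` and `1`.

```python
import sys

n = 0x0102  # 258

# Explicit byteorder has always worked:
assert n.to_bytes(2, "big") == b"\x01\x02"
assert n.to_bytes(2, "little") == b"\x02\x01"
assert int.from_bytes(b"\x02\x01", "little") == n

# Since Python 3.11, omitting byteorder silently means big-endian:
if sys.version_info >= (3, 11):
    assert n.to_bytes(2) == n.to_bytes(2, "big")
```

Silently getting big-endian bytes when the rest of the code assumes little-endian is exactly the kind of bug that fails in non-obvious ways.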
I got to work on a security review of the YOLOv7 vision model. The blog post and report are out now!

Fun fact: There are TorchScript model differentials!

blog.trailofbits.com/2023/11/15/a...
Assessing the security posture of a widely used vision model: YOLOv7
By Alvin Crighton, Anusha Ghosh, Suha Hussain, Heidy Khlaaf, and Jim Miller TL;DR: We identified 11 security vulnerabilities in YOLOv7, a popular computer vision framework, that could enable attack…
blog.trailofbits.com
November 16, 2023 at 7:02 AM
Reposted by Suha
I presented at HackLu about oddities of existing file formats and lessons learned along the way.
Consider it a teaser, as I presented 1/3 of the slide deck (to be released soon).
www.youtube.com/watch?v=6OJ9...
Hack.lu 2023: Do's And Don'ts In File Formats - Ange Albertini
www.youtube.com
October 19, 2023 at 7:43 AM
Reposted by Suha
Neopets taught so many kids how to code, but it taught me how to hack the system by creating multiple accounts and transferring items just up to the limit where you wouldn’t get caught. And anyway, today I’m a cyber lawyer.
October 17, 2023 at 1:47 AM
Reposted by Suha
Hi, I’d like to return these turtles. They don’t do karate
October 13, 2023 at 12:49 AM
Reposted by Suha
These lists may be useful for those of us trying to develop an alternative to ML Twitter, now that it's 40% influencer spam and 20% a war between sci-fi subcultures. I'm on some of these discords and reading some of these newsletters, but I think I'll add 2 or 3 more. #MLsky #cssky
October 10, 2023 at 8:55 PM
Reposted by Suha
Enormous thank you to PyData Amsterdam for inviting me to keynote at a beautiful venue! Slides and notes from my talk, "Build and keep your context window" are all here: vickiboykis.com/2023/09/13/b...
September 14, 2023 at 12:10 PM
Reposted by Suha
I think about this a lot xkcd.com/2044/
September 13, 2023 at 9:51 PM
Reposted by Suha
ICYMI: This is **critical** work for AI ethics / safety / security / regulation right now: Verifying that a model is fitted on a given dataset.
https://arxiv.org/abs/2307.00682
Tools for Verifying Neural Models' Training Data
It is important that consumers and regulators can verify the provenance of large neural models to evaluate their capabilities and risks. We introduce the concept of a "Proof-of-Training-Data": any...
arxiv.org
July 18, 2023 at 1:19 PM
Reposted by Suha
I’ve conjectured this for years, but seeing Papernot and Shumailov on the paper makes me feel really confident in the findings: https://arxiv.org/abs/2305.17493

Existential risk 🙄🙄🙄🙄
June 23, 2023 at 5:17 PM
Reposted by Suha
So remember the "mango pudding" LLM backdooring attack? How safe do you feel using these models now?
July 3, 2023 at 1:40 PM