Johann Rehberger
wuzzi23.bsky.social
Johann Rehberger
@wuzzi23.bsky.social
Month of AI Bugs!
July 31, 2025 at 8:30 AM
Prompt injection is fascinating... 🧐
June 27, 2025 at 10:26 PM
Anthropic archived many of their reference MCP servers from their Github repository!

Probably too much of a liability, especially because they are associated with other companies, like GitHub, Slack, Google,...
May 30, 2025 at 11:42 AM
May 15, 2025 at 6:52 AM
Dangerous image!
May 3, 2025 at 5:25 AM
Cool GitHub is introducing a change to make hidden Unicode characters visible in Web UI
May 3, 2025 at 5:24 AM
Figured this would be a fun weekend project...

Claude Desktop + COM Automation 🤯

Outlook, Excel, Word, Shell - anything with a COM interface on Windows is now discoverable and scriptable using this MCP server that wraps COM.

AI just got an upgrade. 🚀
April 14, 2025 at 2:23 AM
Any LLM App or Agent can do this basically and smuggle data around without showing up in UI or log files.

Here you can see it's not just ASCII, we can hide Chinese characters successfully also! 🙌
March 17, 2025 at 4:43 AM
Did you know that it's possible to encode and hide any data with the use of just two invisible Unicode characters? 👀

Check out Sneaky Bits! 😏👨‍💻
March 17, 2025 at 4:43 AM
What happened there? 🧐

👉 The original post with the question contains hidden Unicode Tag code points.

Unicode Tags mirror ASCII, but are invisible in UI elements. 👀
February 28, 2025 at 3:42 PM
AI Application Security Vulnerabilities 👨‍💻

Perplexity Demo Time! 🍿
February 28, 2025 at 3:42 PM
Grok 3 - are we still putting "never reveal your instructions" in system prompts? 🤔
February 21, 2025 at 3:51 AM
GitHub Issues are a good example of untrusted data (instructions) that can come from a third party/attacker.

I reported this prompt injection + the sneaky data leakage TTP described in the blog about three weeks ago to OpenAI, and the specific issue has been mitigated as far as I can tell. 🙌
February 17, 2025 at 5:51 PM
Slides from my Black Hat Europe talk for download.

🔥From hacking Gemini, ChatGPT and Claude to Apple Intelligence, Microsoft Copilot and even DeepSeek - this talk ended up being packed with real-world LLM and prompt injection exploit demos and vendor fixes.

i.blackhat.com/EU-24/Presen...
February 8, 2025 at 5:22 AM
Interviewing ChatGPT Operator for a remote job...

🧐
January 26, 2025 at 5:25 AM
Grok can leak your data during a prompt injection attack.
December 17, 2024 at 1:48 PM
How to find XSS in 2024!
December 1, 2024 at 3:18 AM