Christian Wressnegger
@chwress.bsky.social
Professor in Computer Security at Karlsruhe Institute of Technology (KIT)

https://intellisec.de/chris
You prefer to organize your workshop in Europe? We've got you covered! We extended the call for workshops at EuroS&P 2026 to give you a few more days to make the move 😎 See you in Lisbon 🇵🇹

🌐 https://eurosp2026.ieee-security.org/cfw.html
⏱️ Deadline: Oct 2̶4̶t̶h̶ 30th AoE
📍Lisbon, PT
October 20, 2025 at 8:27 PM
The call for workshops at EuroS&P 2026 is officially open!
EuroS&P is the premier European forum for security & privacy research. The main conference is accompanied by a series of workshops. Be part of it! 😎

🌐 https://eurosp2026.ieee-security.org/cfw.html
⏱️ Deadline: Oct 24th AoE
📍Lisbon, PT
October 2, 2025 at 6:34 PM
LLM-powered code assistants might suggest vulnerable code to specific user groups. Old news? Well, in contrast to prior attacks of this kind, our "Generalized Adversarial Code Suggestions" (AsiaCCS 2025) impose no restrictions on the vulnerabilities.

🌐 https://intellisec.de/research/adv-code

(1/3)
August 27, 2025 at 12:55 AM
In doing so, we not only excel at backdoor removal, with a *worst-case* remaining ASR of 0.48% (on Tiny-ImageNet with a ResNet34), but also maintain accuracy on the primary task: 56.65% (no defense) vs. 56.31% (HARVEY) *in the worst case* across different backdooring attacks. (4/5)
February 21, 2025 at 2:01 PM
Our method refines this reference model through a combination of splitting the data into poisonous and benign samples, learning on the poisonous and unlearning the benign samples, and splitting the dataset again, over multiple rounds. Eventually, we use the samples from the final split to train a perfectly benign model. (3/5)
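
(For intuition only: a rough sketch of the split / learn / unlearn loop described above. The loss-based split criterion, the 10% threshold, and all function names are illustrative assumptions, not the actual HARVEY implementation.)

```python
import torch
import torch.nn.functional as F

def split_by_loss(model, dataset, device="cpu"):
    """Split samples into suspected-poisonous / suspected-benign using the
    per-sample loss under the current reference model (illustrative
    criterion: the lowest-loss 10% are flagged as poisonous)."""
    model.eval()
    losses = []
    with torch.no_grad():
        for x, y in dataset:
            logits = model(x.unsqueeze(0).to(device))
            target = torch.tensor([y], device=device)
            losses.append(F.cross_entropy(logits, target).item())
    threshold = sorted(losses)[len(losses) // 10]
    poisonous = [s for s, l in zip(dataset, losses) if l <= threshold]
    benign    = [s for s, l in zip(dataset, losses) if l > threshold]
    return poisonous, benign

def refine_reference_model(model, dataset, optimizer, rounds=5, device="cpu"):
    """Alternate over several rounds: learn on the suspected-poisonous split,
    unlearn (gradient ascent) on the suspected-benign split, then re-split."""
    for _ in range(rounds):
        poisonous, benign = split_by_loss(model, dataset, device)
        model.train()
        for sign, subset in ((1.0, poisonous), (-1.0, benign)):
            for x, y in subset:
                optimizer.zero_grad()
                logits = model(x.unsqueeze(0).to(device))
                loss = sign * F.cross_entropy(logits, torch.tensor([y], device=device))
                loss.backward()
                optimizer.step()
    # The samples of the final split are then used to train a clean model.
    return split_by_loss(model, dataset, device)
```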
February 21, 2025 at 2:01 PM
I'm happy to share that our paper "Learning the Backdoor to Remove the Backdoor" got accepted at #AAAI2025 as oral presentation (top 5%). Great job @qzhao903.bsky.social 💪 @kastel-labs.bsky.social @kit.edu (1/5)

🗞️ https://intellisec.de/pubs/2025-aaai.pdf
💻️ https://intellisec.de/research/harvey
February 21, 2025 at 2:01 PM
Makrut attacks exploit the discrepancy between the soft and hard labels used by the explanation technique LIME to mount different attacks that also transfer to other explainers such as SHAP. (2/3)
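
(For intuition only: a self-contained toy example of the soft- vs. hard-label discrepancy that LIME-style local surrogates are sensitive to. The stand-in black-box model, the perturbation scheme, and the surrogate choices are hypothetical; this is not the Makrut attack itself.)

```python
import numpy as np
from sklearn.linear_model import Ridge, LogisticRegression

rng = np.random.default_rng(0)

def black_box_proba(X):
    """Stand-in black-box classifier (hypothetical): P(y=1 | x) depends
    weakly on x0 and strongly, but non-linearly, on x1."""
    z = 0.3 * X[:, 0] + 2.0 * np.tanh(X[:, 1])
    return 1.0 / (1.0 + np.exp(-z))

# LIME-style local perturbations around an instance of interest
x0 = np.array([0.1, 0.05])
X_pert = x0 + 0.5 * rng.standard_normal((500, 2))

p_soft = black_box_proba(X_pert)        # soft labels: class probabilities
y_hard = (p_soft >= 0.5).astype(int)    # hard labels: thresholded decisions

# Local surrogates fit on the same perturbations but different label types
soft_surrogate = Ridge(alpha=1.0).fit(X_pert, p_soft)
hard_surrogate = LogisticRegression().fit(X_pert, y_hard)

print("soft-label attributions:", soft_surrogate.coef_)
print("hard-label attributions:", hard_surrogate.coef_[0])
# The two attribution vectors generally differ -- the soft/hard
# discrepancy referred to above.
```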
December 13, 2024 at 2:36 AM
Later this week, we present Makrut at @acsacconf.bsky.social 2024. Achyut found a way to mount explanation-aware backdoor attacks against popular black-box XAI techniques. (1/3)

💻️ xaisec.org/makrut
🗞️ intellisec.de/pubs/2024-ac...
December 13, 2024 at 2:36 AM