Christian Wressnegger
@chwress.bsky.social
Professor in Computer Security at Karlsruhe Institute of Technology (KIT)

https://intellisec.de/chris
Please reach out to Vera and me if you have any questions.

CC: @kastel-labs.bsky.social @kitinformatik.bsky.social @kit.edu #KITKarlsruhe
October 20, 2025 at 8:27 PM
Vera Rimmer (DistriNet, KU Leuven) and I are chairing the selection. Please reach out to us if you have any questions.

CC: @kastel-labs.bsky.social @kitinformatik.bsky.social @kit.edu #KITKarlsruhe
October 2, 2025 at 6:34 PM
Max is currently in Hanoi, Vietnam, and will present the paper today. Make sure not to miss it if you are at the conference.

🕚 August 27th 11:00 local time right after the break
📍 Session 2, Ballroom 2

(3/3)
August 27, 2025 at 12:55 AM
This project also started out as a Master's thesis @kit.edu @kitinformatik.bsky.social @kastel-labs.bsky.social
Karl did an amazing job 💪 He pushed super hard for the best possible result, which was eventually accepted at AsiaCCS 2025. Congratz again 🥳🎉

(2/3)
August 27, 2025 at 12:55 AM
Qi is going to present our method, HARVEY, in Philadelphia at #AAAI2025 on Sunday, March 2, at 2pm. See you there! 😎 (5/5)

🌐 https://aaai.org/conference/aaai/aaai-25/program-overview/
February 21, 2025 at 2:01 PM
In doing so, we excel not only at backdoor removal, with a *worst-case* remaining attack success rate (ASR) of 0.48% (on Tiny-ImageNet with a ResNet34), but also at maintaining accuracy on the primary task: 56.65% (no defense) vs. 56.31% (HARVEY) *in the worst case* across different backdooring attacks. (4/5)
February 21, 2025 at 2:01 PM
Our method refines this reference model over multiple rounds: splitting the data into poisonous and benign samples, learning on the poisonous and unlearning the benign samples, and then splitting the dataset again. Eventually, we use the samples from the final split to train a perfectly benign model. (3/5)
February 21, 2025 at 2:01 PM
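For intuition, a minimal PyTorch sketch of such a refinement loop. The splitter `split_fn`, the round count, and plain gradient ascent as the unlearning step are illustrative assumptions on my part, not HARVEY's actual implementation:

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset

def refine_reference(model, dataset, split_fn, rounds=5, lr=1e-3, device="cpu"):
    """Alternate between splitting and (un)learning: gradient descent on
    suspected-poisonous samples, gradient ascent on suspected-benign ones."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(rounds):
        poison_idx, benign_idx = split_fn(model, dataset)  # assumed splitter
        for idx, sign in ((poison_idx, 1.0), (benign_idx, -1.0)):
            loader = DataLoader(Subset(dataset, idx.tolist()),
                                batch_size=64, shuffle=True)
            model.train()
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                loss = sign * F.cross_entropy(model(x), y)
                opt.zero_grad()
                loss.backward()  # sign = -1 flips descent into ascent (unlearning)
                opt.step()
    return split_fn(model, dataset)  # final split; train the clean model on it
```

Negating the loss is the simplest possible unlearning scheme; the paper may well use a more careful one.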
The idea is to remove poisonous samples that might introduce a backdoor during training. While learning a benign model is difficult in this setting, it is rather easy to learn a strongly backdoored one. This strongly backdoored model can then serve as an oracle for finding poisonous samples. (2/5)
February 21, 2025 at 2:01 PM
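A hedged sketch of how such an oracle could split the dataset, assuming that poisonous samples are exactly those the strongly backdoored model fits with unusually low loss; the scoring rule and the 5% cutoff are my assumptions, not the thread's:

```python
import torch
import torch.nn.functional as F

def split_by_oracle(oracle, loader, quantile=0.05, device="cpu"):
    """Score every sample with the backdoored oracle; samples it fits
    suspiciously well (lowest loss) become poison candidates."""
    oracle.eval()
    losses = []
    with torch.no_grad():
        for x, y in loader:  # loader must iterate in fixed, unshuffled order
            x, y = x.to(device), y.to(device)
            losses.append(F.cross_entropy(oracle(x), y, reduction="none").cpu())
    losses = torch.cat(losses)
    cutoff = torch.quantile(losses, quantile)  # hypothetical 5% cutoff
    suspect = (losses <= cutoff).nonzero(as_tuple=True)[0]
    return suspect, losses
```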