Hossein Mirzaei
mirzious.bsky.social
Reposted by Hossein Mirzaei
This approach enhances the reliability of trigger reconstruction, enabling it to distinguish clean from trojaned models. 🚀

Congrats to all the authors who did an amazing job! 3/4
August 25, 2025 at 7:44 AM
Reposted by Hossein Mirzaei
By employing a diffusion-based generator guided by the target classifier, #DISTIL iteratively produces candidate triggers that align with the model's internal representations associated with malicious behavior. 2/4
August 25, 2025 at 7:44 AM
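The classifier-guided search described in the post above can be sketched in miniature. This is purely illustrative, not DISTIL's actual method: a toy linear classifier stands in for the trojaned DNN, and plain gradient ascent stands in for the diffusion-based generator; all names and numbers here are assumptions.

```python
import numpy as np

# Toy stand-in for the trojaned DNN: a random linear classifier.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))            # 3 classes, 8-dim inputs

def logits(x):
    return W @ x

def invert_trigger(x, target, steps=200, lr=0.1):
    """Search for a small additive perturbation (a candidate 'trigger')
    that drives the classifier toward the attacker's target class.
    For a linear model the gradient of the target logit is just W[target]."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        delta += lr * W[target]            # ascend the target logit
        delta = np.clip(delta, -1.0, 1.0)  # keep the trigger small
    return delta

x = rng.normal(size=8)
delta = invert_trigger(x, target=2)
# The recovered delta raises the target-class logit on x.
```

In DISTIL proper, a diffusion model proposes the candidate triggers and the classifier's internal representations provide the guidance; the gradient loop above is only the simplest possible analogue.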
Reposted by Hossein Mirzaei
My lab has been pushing into explainable, robust, & theoretically tractable AI models for science 💪

New! Accepted to #ICCV2025: we introduce #DISTIL, led by amazing PhD student @mirzious.bsky.social. We propose a trigger-inversion method for DNNs that reconstructs malicious backdoor triggers. 1/4
August 25, 2025 at 7:44 AM
Reposted by Hossein Mirzaei
✨ Introducing a new #SOTA action recognition large multimodal language model: #LLaVAction!

By @shaokaiye.bsky.social, Haozhe Qi, @trackingskills.bsky.social, and me!

📝 arxiv.org/abs/2503.18712

🤖 mmathislab.github.io/llavaction/

1/n
March 25, 2025 at 8:46 AM
Reposted by Hossein Mirzaei
#ICCV2025 submitted = time to 💤😴☕️! But brilliant pushes from Shaokai Ye and co-authors, and @mirzious.bsky.social and co-authors. Looking forward to sharing the works with you all very soon! #ActionRecognition #TrustworthyML
March 8, 2025 at 11:01 AM
Reposted by Hossein Mirzaei
🔥🙏🏼 #AROS 💍 is accepted to #ICLR2025!

So proud of @mirzious.bsky.social - what an awesome way to kick off his first grad school project 👌👌

Check out the arxiv version of the paper, open code (including python package) below ⬇️

TL;DR need more robustness?! #PutARingOnIt 💍
January 22, 2025 at 5:24 PM
Reposted by Hossein Mirzaei
And a big welcome to @mirzious.bsky.social to Bluesky! 💙🦋👏 - please follow him; he’s a rising star in trustworthy, robust AI for science (just check out his CV 🔥💪): scholar.google.com/citations?us...
November 27, 2024 at 11:27 PM
Reposted by Hossein Mirzaei
Plus the code is #opensource and comes as a Python package for ease of testing and adding to your fav OOD problem 👏

github.com/AdaptiveMoto...

Demo it in Colab, etc!

Stars ⭐️ appreciated! Always helpful to know when to support a code base 😉🥰🍾
GitHub - AdaptiveMotorControlLab/AROS: 💍
November 27, 2024 at 9:18 PM
Reposted by Hossein Mirzaei
#AROS💍 leverages neural ODEs and Lyapunov stability theory to craft an embedding method to smartly detect OOD samples. Strikingly, we can improve performance on popular adversarial detection benchmarks such as CIFAR10 vs CIFAR100 by over 40% 👏

🔥🚀 we are excited to keep pushing this line of work 💪
November 27, 2024 at 9:12 PM
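The stability idea in the post above can be caricatured with a hand-written ODE. This toy numpy sketch assumes nothing from the paper beyond the slogan: class prototypes act as Lyapunov-stable equilibria of a simple flow, trajectories are Euler-integrated, and the OOD score is the residual distance after finite time. The prototypes, step sizes, and test points are all made up for illustration; AROS's actual dynamics are learned, not hand-written.

```python
import numpy as np

# Class prototypes act as stable equilibria of the toy flow.
protos = np.array([[ 2.0, 0.0],
                   [-2.0, 0.0]])

def flow(z, steps=20, dt=0.1):
    """Euler-integrate dz/dt = -(z - p*), where p* is the nearest prototype.
    V(z) = ||z - p*||^2 is a Lyapunov function for this ODE: it decreases
    along every trajectory, so each prototype is a stable equilibrium."""
    z = np.asarray(z, dtype=float)
    for _ in range(steps):
        p = protos[np.argmin(np.linalg.norm(protos - z, axis=1))]
        z = z + dt * (p - z)
    return z

def ood_score(z):
    """Residual distance to the nearest equilibrium after finite-time
    integration: in-distribution points land near a prototype,
    OOD points remain far away."""
    zT = flow(z)
    return float(np.min(np.linalg.norm(protos - zT, axis=1)))

score_id = ood_score([1.8, 0.2])   # near a prototype -> small score
score_ood = ood_score([0.0, 6.0])  # far from both -> large score
```

The contraction toward equilibria is what makes the score discriminative: both points get pulled inward, but a point that starts far from every prototype still ends far away after finite time.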
Reposted by Hossein Mirzaei
Adversarial robustness is becoming even more critical as #AI systems are deployed in the real world, but how can we detect outliers (adversarials) without training on them?

🔥 NEW work by @mirzious.bsky.social, a super talented PhD student in my group 🧠🧪 🚀

📊➡️ #AROS💍 arxiv.org/abs/2410.10744

1/2
November 27, 2024 at 9:12 PM