michele-miranda.bsky.social
@michele-miranda.bsky.social
If you're interested in the intersection of #AI and #Privacy, or if you're working with #LLMs, we invite you to read our paper. Feedback and discussions are highly welcome! 💬
February 12, 2025 at 4:19 PM
This work is a collaboration with
@esruzzetti.bsky.social, @asantilli.bsky.social, @fmz8.bsky.social, Sébastien Bratières, and
Emanuele Rodolà. 🙌 It's part of the EU project DataTools4Heart, joined by Translated.
📖 The core approaches discussed in our survey are available in this GitHub repository:
github.com/michele17284...
GitHub - michele17284/Awesome-Privacy-Preserving-LLMs: Collection of all the papers talking about/relevant to the topic of privacy-preserving LLMs
We also review available libraries for implementing these privacy mechanisms in models.
To address these challenges, we explore comprehensive solutions for integrating privacy mechanisms throughout the learning pipeline—from data anonymization and differential privacy to machine unlearning techniques. 🛡️
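As a toy illustration of the differential-privacy idea mentioned above (a sketch of the standard Laplace mechanism, not code from the paper), calibrated noise is added to a released statistic; the `dp_count` helper below is a hypothetical example for a counting query of sensitivity 1:

```python
import random

def laplace_noise(scale: float) -> float:
    # Laplace(0, b) sampled as the difference of two exponentials with mean b.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(true_count: int, epsilon: float) -> float:
    # A counting query has sensitivity 1, so the noise scale is 1 / epsilon.
    # Smaller epsilon means stronger privacy and a noisier answer.
    return true_count + laplace_noise(1.0 / epsilon)
```

With a small epsilon the released count can deviate substantially from the true one; that deliberate noise is what provides the privacy guarantee.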
We examine threats by reviewing privacy attacks on LLMs such as Training Data Extraction, Membership Inference, and Model Inversion, along with their implications. ⚠️
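To make the Membership Inference threat concrete (a simplified sketch of the classic loss-threshold attack, not the survey's code): the attacker flags a sample as a likely training member when the model's loss on it is unusually low, since memorized training data tends to receive high confidence.

```python
import math

def token_nll(probs: list[float], target_idx: int) -> float:
    # Negative log-likelihood the model assigns to the true next token.
    return -math.log(probs[target_idx])

def is_likely_member(sample_loss: float, threshold: float) -> bool:
    # Loss-threshold attack: training members tend to be memorized,
    # so their loss falls below a calibrated threshold.
    return sample_loss < threshold

# A model that is very confident on a sample it has memorized...
member_loss = token_nll([0.01, 0.98, 0.01], 1)
# ...versus an unseen sample it is uncertain about.
nonmember_loss = token_nll([0.4, 0.3, 0.3], 1)
```

In practice the threshold is calibrated on data the attacker knows to be outside the training set; more refined variants compare against reference models.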
Large Language Models are incredibly powerful but pose significant privacy risks when trained on private or sensitive data—especially in critical domains like healthcare🏥.