Flavio Calmon
@fcalmon.bsky.social
Associate Professor @Harvard SEAS. Information theorist, but only asymptotically.
New paper on discretion in AI “alignment” — check out @maartenbuyl.bsky.social’s thread below!
AI is built to “be helpful” or “avoid harm”, but which principles should it prioritize and when? We call this alignment discretion. As Asimov's stories show: balancing such principles for AI behavior is tricky. In fact, we find that AI has its own set of priorities. (comic by @xkcd.com)🧵👇
February 20, 2025 at 1:42 AM
Reposted by Flavio Calmon
The standard practice in differential privacy of targeting ε at a small δ is extremely lossy for interpreting the level of privacy protection. For many real-world algorithms (e.g., for DP-SGD), we can do much better!

We show how in the #NeurIPS2024 paper:
arxiv.org/abs/2407.02191

Short summary👇
Attack-Aware Noise Calibration for Differential Privacy
Differential privacy (DP) is a widely used approach for mitigating privacy risks when training machine learning models on sensitive data. DP mechanisms add noise during training to limit the risk of i...
arxiv.org
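For context, a minimal sketch (not the paper's method) of the standard hypothesis-testing reading of (ε, δ)-DP: a reported (ε, δ) can be converted into an upper bound on the true-positive rate of any membership-inference attack at a given false-positive rate, which is often far more informative than reading ε alone. Function and variable names below are illustrative.

```python
# Sketch: the standard (eps, delta)-DP trade-off region bounds any
# membership-inference attack's true-positive rate (TPR) as a function
# of its false-positive rate (FPR). Not the paper's calibration method.
import numpy as np

def attack_tpr_bound(fpr, eps, delta):
    """Upper bound on attack TPR at a given FPR under (eps, delta)-DP."""
    fpr = np.asarray(fpr, dtype=float)
    return np.minimum.reduce([
        np.ones_like(fpr),                          # TPR can never exceed 1
        np.exp(eps) * fpr + delta,                  # TPR <= e^eps * FPR + delta
        1.0 - np.exp(-eps) * (1.0 - delta - fpr),   # symmetric counterpart of the bound
    ])

# Example: at low FPR, the same (eps, delta) implies a much smaller attack
# success rate than a naive "e^eps odds ratio" reading of eps would suggest.
fprs = np.array([1e-4, 1e-3, 1e-2, 1e-1])
print(attack_tpr_bound(fprs, eps=8.0, delta=1e-5))
```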
December 10, 2024 at 3:11 AM
Reposted by Flavio Calmon
This is joint work with Felipe Gomez, Georgios Kaissis, @fcalmon.bsky.social, and @carmelatroncoso.bsky.social

Happy to chat about it online, and in 🇨🇦+🇺🇸 over the next two weeks:
- At the #NeurIPS2024 poster session on the evening of Friday, Dec. 13.
- I will also present it in more detail on Tuesday, Dec. 17 at Harvard.
December 10, 2024 at 3:11 AM