Bogdan Kulynych
bogdankulynych.bsky.social
researcher studying privacy, security, reliability, and broader social implications of algorithmic systems · fake doctor working at a real hospital
website: https://kulyny.ch
Speaking on LLMs and Privacy tomorrow at the AMLD @appliedmldays.bsky.social AI in Clinical Care afternoon track – let's chat if you are in Lausanne.
February 10, 2025 at 8:08 PM
We also provide an Opacus-compatible accountant, which incidentally also gives tighter (ε, δ) accounting, since it is based on the state-of-the-art Connect the Dots accountant.
December 10, 2024 at 3:11 AM
It is also just one pip install away, and you can use it in a framework-agnostic way:

# pip install riskcal
December 10, 2024 at 3:11 AM
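For intuition, here is what calibrating noise directly to attack risk computes under the hood, sketched by hand for a plain Gaussian mechanism with sensitivity 1. This is an illustrative sketch of the idea, not riskcal's actual API:

```python
from scipy.stats import norm

def advantage(sigma):
    """Membership-inference advantage of the Gaussian mechanism
    with sensitivity 1 and noise scale sigma: 2*Phi(1/(2*sigma)) - 1."""
    return 2 * norm.cdf(1 / (2 * sigma)) - 1

def calibrate_sigma(target_adv, lo=1e-3, hi=1e3, tol=1e-9):
    """Bisection: smallest noise scale whose attack advantage
    stays at or below target_adv (advantage decreases in sigma)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if advantage(mid) > target_adv:
            lo = mid
        else:
            hi = mid
    return hi

sigma = calibrate_sigma(0.1)
print(sigma, advantage(sigma))
```

The same bisection pattern works for any mechanism whose attack risk is monotone in the noise scale, which is what makes this kind of calibration framework-agnostic.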
When we calibrate to attack risk directly, we gain a lot of utility (e.g., 18 p.p. higher classification accuracy in a text sentiment classification task) at the same level of operational privacy risk:
December 10, 2024 at 3:11 AM
When we derive these from a single (ε, δ) pair, we lose *a ton* of information about the privacy guarantees. In this real example with DP-SGD, using the standard interpretation, it seems that there is no privacy (blue). In reality, it's pretty private (orange)!
December 10, 2024 at 3:11 AM
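To see how much a single (ε, δ) pair can hide, here is a small illustration with a plain Gaussian mechanism (sensitivity 1, σ = 2 — an illustrative choice, not the DP-SGD setup from the figure). Away from the operating point where the single-pair bound is tight, the pair-based curve lets the attack look far stronger than the exact trade-off curve says it can be:

```python
from math import exp
from scipy.stats import norm

sigma = 2.0  # noise scale, sensitivity 1 (illustrative)

def delta_for_eps(eps, sigma):
    """Exact delta(eps) for the Gaussian mechanism (Balle & Wang analytic formula)."""
    return norm.cdf(1/(2*sigma) - eps*sigma) - exp(eps) * norm.cdf(-1/(2*sigma) - eps*sigma)

def gaussian_tradeoff(alpha, sigma):
    """Exact attacker FNR at FPR alpha for the Gaussian mechanism (GDP curve)."""
    return norm.cdf(norm.ppf(1 - alpha) - 1/sigma)

def dp_tradeoff(alpha, eps, delta):
    """FNR lower bound implied by a single (eps, delta) pair."""
    return max(0.0, 1 - delta - exp(eps)*alpha, exp(-eps)*(1 - delta - alpha))

eps = 1.0
delta = delta_for_eps(eps, sigma)
# At FPR = 0.3 the exact curve forces FNR ~ 0.51, while the single-pair
# bound only guarantees FNR ~ 0.26 -- i.e., it overstates the attack.
print(gaussian_tradeoff(0.3, sigma), dp_tradeoff(0.3, eps, delta))
```

The gap between the two numbers is exactly the information thrown away by summarizing the whole privacy curve as one (ε, δ) point.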
The most common way to give DP guarantees operational meaning is to convert the (ε, δ) parameters into notions of attack risk, such as the maximum inference accuracy (left) or the attacker's trade-off/ROC curve (right).
December 10, 2024 at 3:11 AM
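A minimal sketch of that conversion, using the standard bounds (in the style of Kairouz et al.; this is the textbook math, not riskcal's implementation):

```python
from math import exp

def tradeoff_curve(alpha, eps, delta):
    """Lower bound on the attacker's FNR at FPR alpha under (eps, delta)-DP."""
    return max(0.0, 1 - delta - exp(eps) * alpha, exp(-eps) * (1 - delta - alpha))

def max_attack_accuracy(eps, delta):
    """Upper bound on balanced membership-inference accuracy under (eps, delta)-DP,
    obtained by minimizing (FPR + FNR)/2 along the trade-off curve."""
    return 1 - (1 - delta) / (exp(eps) + 1)
```

For example, at (ε, δ) = (0, 0) the bound is 0.5, i.e., the attacker can do no better than random guessing.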