We show how to identify training samples most vulnerable to membership inference attacks - FOR FREE, using artifacts naturally available during training! No shadow models needed.
Learn more: computationalprivacy.github.io/loss_traces/
Thread below 🧵
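For intuition, here's a minimal PyTorch sketch of the kind of training artifact we mean: per-sample loss traces recorded during ordinary training, with a simple trace statistic used as a rough vulnerability proxy. The dataset wrapper and the ranking statistic are illustrative assumptions, not the paper's exact method — see the link above for the real thing.

```python
# Sketch only: collect per-sample loss traces during normal training,
# then rank samples by a heuristic trace statistic. The statistic below
# (mean loss over epochs) is an assumed proxy, not the paper's method.
import torch
import torch.nn.functional as F

class IndexedDataset(torch.utils.data.Dataset):
    """Wraps a dataset so each batch also carries the samples' indices."""
    def __init__(self, base):
        self.base = base
    def __len__(self):
        return len(self.base)
    def __getitem__(self, i):
        x, y = self.base[i]
        return x, y, i

def train_with_loss_traces(model, loader, optimizer, epochs, n_samples):
    # loss_traces[i, t] = loss of training sample i at epoch t
    loss_traces = torch.zeros(n_samples, epochs)
    for epoch in range(epochs):
        for inputs, targets, idx in loader:  # loader built on IndexedDataset
            logits = model(inputs)
            per_sample_loss = F.cross_entropy(logits, targets, reduction="none")
            loss_traces[idx, epoch] = per_sample_loss.detach()
            optimizer.zero_grad()
            per_sample_loss.mean().backward()
            optimizer.step()
    return loss_traces

def rank_by_vulnerability(loss_traces):
    # Heuristic: samples the model struggles to fit tend to be outliers,
    # which prior work links to higher membership inference risk.
    return torch.argsort(loss_traces.mean(dim=1), descending=True)
```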
If your answer is “we checked Distance to Closest Record (DCR),” then… we might have bad news for you.
Our latest work shows DCR and other proxy metrics to be inadequate measures of the privacy risk of synthetic data.
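For context, DCR is typically computed as each synthetic record's nearest-neighbor distance to the real training data. A minimal NumPy sketch of that common check follows; the distance metric, quantile, and holdout comparison are assumptions about a typical evaluation, not any specific tool's API.

```python
# Sketch of the common DCR check: per-record nearest-neighbor distance
# from synthetic records to the real training data.
import numpy as np

def dcr(synthetic: np.ndarray, train: np.ndarray) -> np.ndarray:
    # Euclidean distance from every synthetic row to every training row,
    # then the minimum per synthetic record: (m, d) x (n, d) -> (m,)
    dists = np.linalg.norm(synthetic[:, None, :] - train[None, :, :], axis=-1)
    return dists.min(axis=1)

# Typical usage: flag risk only if synthetic records sit closer to the
# training data than a holdout set does -- an aggregate test that says
# nothing about which individual records remain exposed.
rng = np.random.default_rng(0)
syn, train, holdout = (rng.random((1000, 8)) for _ in range(3))
print(np.quantile(dcr(syn, train), 0.05))
print(np.quantile(dcr(holdout, train), 0.05))
```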
📅 February 4th, 6pm, Imperial College London
We are happy to announce the new date for the first Privacy in ML Meetup @ Imperial, bringing together researchers from across academia and industry.
RSVP: www.imperial.ac.uk/events/18318...