2/ Prior work (e.g., proceedings.mlr.press/v97/sablayro...) suggests black-box access is optimal for membership inference, assuming SGLD as the learning algorithm. But this assumption breaks down for models trained with SGD.
December 18, 2024 at 3:39 AM
3/ Our work challenges this assumption head-on. By carefully analyzing SGD dynamics, we prove that optimal membership inference requires white-box access to model parameters. Our Inverse Hessian Attack (IHA) serves as a proof of concept that parameter access helps!
December 18, 2024 at 3:39 AM
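For intuition, here is a minimal sketch of what an inverse-Hessian-style membership score with parameter access could look like: score a candidate point by its loss adjusted with a gradient term preconditioned by a (damped) inverse Hessian, approximated via Hessian-vector products. The function name, the damping and conjugate-gradient details, and the exact form of the score are all illustrative assumptions here, not the IHA formulation from the paper.

import torch

def inverse_hessian_membership_score(model, loss_fn, x, y,
                                     damping=1e-3, cg_steps=10):
    """Hypothetical white-box membership score: per-point loss adjusted by
    g^T H^{-1} g, with H^{-1} g approximated by conjugate gradient using
    Hessian-vector products. Illustrative only, not the paper's exact IHA."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x), y)
    # Gradient of the candidate point's loss w.r.t. parameters,
    # with create_graph=True so second derivatives are available.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    g = torch.cat([gr.reshape(-1) for gr in grads])

    def hvp(v):
        # Hessian-vector product (H + damping*I) v via double backprop.
        gv = (g * v).sum()
        hv = torch.autograd.grad(gv, params, retain_graph=True)
        return torch.cat([h.reshape(-1) for h in hv]) + damping * v

    # Conjugate gradient: solve (H + damping*I) u = g, so u ~ H^{-1} g.
    g_d = g.detach()
    u = torch.zeros_like(g_d)
    r = g_d.clone()
    p = g_d.clone()
    rs = r @ r
    for _ in range(cg_steps):
        Hp = hvp(p).detach()
        alpha = rs / (p @ Hp)
        u = u + alpha * p
        r = r - alpha * Hp
        rs_new = r @ r
        if rs_new < 1e-10:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new

    # Assumed convention: lower score -> more likely a training member.
    return (loss.detach() - g_d @ u).item()

Conjugate gradient is what keeps a sketch like this tractable for real models: it only ever touches H through Hessian-vector products, never materializing the full matrix.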
4/ The big open question remains: how close are optimal black-box attacks to this theoretical optimum? The gap might be negligible, suggesting black-box methods suffice, or it might be significant, showing that parameter access offers better empirical upper bounds 🤔
December 18, 2024 at 3:39 AM