1⃣ formalise the online learning unlearning (OLU) problem setting
2⃣ propose two styles of OLU algorithms
3⃣ In the Online Convex Optimisation (OCO) framework, our algorithms nearly match the regret guarantees of standard OCO without unlearning
»Differentially Private Steering for Large Language Model Alignment« by @anmolgoel.bsky.social, Yaxi Hu, Iryna Gurevych (@igurevych.bsky.social) & Amartya Sanyal (@amartyasanyal.bsky.social)
(2/🧵)
Synthetic data algorithms that don't provably account for privacy probably don't provide privacy.
But there are private synthetic data generation algorithms that do, like the one @gautamkamath.com linked above.
The point is perhaps that augmentations, by themselves, don’t inherently guarantee an increase or decrease in privacy.