Read the full preprint here: arxiv.org/abs/2508.15374
Authors: Omri Ben-Dov, Samira Samadi, @amartyasanyal.bsky.social, @alext2.bsky.social
We hope this paper inspires new research into user-side bias mitigation.
(4/4)
Weakness: User-side methods generally cannot reach perfect fairness, while firm-side algorithms can.
Strength: User-side methods incur a smaller accuracy cost than firm-side algorithms.
(3/4)
To approximate the correct labels, we propose three model-agnostic methods.
Across several datasets, 20–30% of the minority group is enough to achieve the best possible fairness.
(2/4)
Firm-side fair learning often reduces accuracy, discouraging firms from using it. But if a platform relies on user data, can minority users collectively change the data to induce fairness?
(1/4)