Omri Ben-Dov
@omribendov.bsky.social
PhD student at the Max Planck Institute for Intelligent Systems

https://beomri.github.io/
We also evaluate our methods and theoretically analyze their limitations.

Read the full preprint here: arxiv.org/abs/2508.15374
Authors: Omri Ben-Dov, Samira Samadi, @amartyasanyal.bsky.social, @alext2.bsky.social

We hope this paper inspires new research into user-side bias mitigation.
(4/4)
August 22, 2025 at 6:45 AM
How do user-side methods compare with firm-side fair learning?

Weakness: user-side methods generally cannot reach perfect fairness, while firm-side algorithms can.

Strength: user-side methods incur a smaller accuracy cost than firm-side algorithms.

(3/4)
August 22, 2025 at 6:45 AM
We show how algorithmic collective action can be aligned with fairness, leading the collective to a relabeling strategy.

To approximate the correct labels, we propose three model-agnostic methods.

Across several datasets, a collective of 20-30% of the minority group is enough to achieve the best possible fairness.
(2/4)
August 22, 2025 at 6:45 AM
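
To make the relabeling idea from (2/4) concrete, here is a minimal sketch. It is not the paper's implementation: the synthetic data, the bias model, and the use of ground-truth labels as a stand-in for the paper's three approximation methods are all illustrative assumptions.

```python
# Minimal, illustrative sketch (not the paper's actual methods): a fraction
# "alpha" of the minority group relabels its own training points before the
# firm trains on them. Here the collective cheats by using ground-truth labels;
# the paper instead proposes three model-agnostic ways to approximate them.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: two features, group g (1 = minority), and biased labels y
# where a chunk of the minority's positive labels was flipped to negative.
n = 2000
g = (rng.random(n) < 0.3).astype(int)
x = rng.normal(size=(n, 2))
y_true = (x[:, 0] + x[:, 1] > 0.0).astype(int)
y = y_true.copy()
y[(g == 1) & (rng.random(n) < 0.4)] = 0

def parity_gap(model):
    """Absolute difference in positive prediction rates between groups."""
    preds = model.predict(x)
    return abs(preds[g == 1].mean() - preds[g == 0].mean())

def gap_with_collective(alpha):
    """Train after a fraction `alpha` of the minority restores its labels."""
    y_mod = y.copy()
    minority = np.flatnonzero(g == 1)
    collective = rng.choice(minority, size=int(alpha * len(minority)), replace=False)
    y_mod[collective] = y_true[collective]   # the relabeling strategy
    return parity_gap(LogisticRegression().fit(x, y_mod))

for alpha in (0.0, 0.25, 0.5):
    print(f"collective fraction {alpha:.2f} -> parity gap {gap_with_collective(alpha):.3f}")
```

In this toy setup, growing the participating fraction of the minority shrinks the demographic parity gap of the trained model, which is the qualitative effect the thread describes.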