Yuhong Luo
@jamesluoyh.bsky.social
Going to NeurIPS 25, San Diego!
Our paper “Fair Representation Learning with Controllable High Confidence Guarantees via Adversarial Inference” (FRG) will be presented at #NeurIPS 2025 in San Diego! Come visit our poster #1411 (December 3, 11-2). We’d love to chat! Paper: arxiv.org/abs/2510.21017
@przemyslslaw.bsky.social
November 26, 2025 at 7:30 PM
Empirically, FRG satisfies the fairness constraint (e.g., demographic parity) with high probability for all evaluated downstream models and tasks, and it matches or outperforms state-of-the-art methods. Other methods either violate the fairness constraints with non-trivial probability or perform worse.
November 26, 2025 at 7:26 PM
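Concretely, demographic parity asks that a model's positive-prediction rate be (near-)equal across groups. Below is a minimal sketch of checking this on a downstream model's held-out predictions; it assumes binary predictions y_hat and binary group labels a, and the name dp_gap and the epsilon value are illustrative, not from the paper:

```python
import numpy as np

def dp_gap(y_hat: np.ndarray, a: np.ndarray) -> float:
    """Demographic parity gap: |P(y_hat=1 | a=0) - P(y_hat=1 | a=1)|."""
    rate_0 = y_hat[a == 0].mean()  # positive-prediction rate in group 0
    rate_1 = y_hat[a == 1].mean()  # positive-prediction rate in group 1
    return abs(rate_0 - rate_1)

# Toy example: one downstream model's predictions on held-out data.
y_hat = np.array([1, 0, 1, 1, 0, 1, 0, 0])
a     = np.array([0, 0, 0, 0, 1, 1, 1, 1])
epsilon = 0.1  # illustrative tolerance; FRG lets the user choose this threshold
print(dp_gap(y_hat, a), dp_gap(y_hat, a) <= epsilon)
```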
Because ML algorithms are inherently probabilistic, no useful model can guarantee a given fairness constraint with certainty for every downstream model. We therefore propose FRG, the first framework to provide such fairness guarantees with high confidence, at a user-controllable error threshold.
November 26, 2025 at 7:26 PM
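In symbols, the guarantee described above has roughly this shape. This is a sketch only, assuming demographic parity as the fairness measure; the paper's exact statement may differ:

```latex
% Hedged sketch of a high-confidence fairness guarantee:
% \phi  - the learned representation
% f     - any downstream model trained on top of \phi
% \Delta_{\mathrm{DP}} - demographic parity gap
% \epsilon - user-chosen unfairness threshold
% \delta   - allowed failure probability
\Pr\Big( \sup_{f} \, \Delta_{\mathrm{DP}}\big(f \circ \phi\big) \le \epsilon \Big) \ge 1 - \delta
```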
It's common today to learn predictive user representations that are reused across multiple downstream tasks. But models trained on these representations can produce unfair predictions. So, in our recent #NeurIPS25 paper, we propose bounding the unfairness across all downstream models for any task, with fine-grained control over the bound.
November 26, 2025 at 7:26 PM