Existing group fairness-aware training methods fall into two categories: re-weighting underrepresented groups according to certain rules, or adding regularization terms to the objective, such as smoothed approximations of fairness metrics or surrogate statistical quantities. While each category has its own strengths, whether in applicability or in performance, its success is typically limited to specific settings. To address this limitation, we propose a new approach called FairDRO, which combines the advantages of both categories through a classwise group distributionally robust optimization (DRO) framework. Our method unifies re-weighting and regularization by incorporating a well-justified group fairness metric into the objective as a regularizer and then solving the resulting problem through a principled re-weighting strategy. To optimize the resulting objective efficiently, we adopt an iterative algorithm and develop two variants of the FairDRO algorithm, depending on the choice of surrogate loss. For a deeper understanding of our method, we derive three theoretical results: (i) a closed-form solution for the optimal re-weights; (ii) justifications for using the surrogate losses; and (iii) a convergence analysis. Experiments show that our algorithms consistently achieve state-of-the-art accuracy-fairness trade-offs across multiple benchmarks, demonstrating greater scalability and broader applicability than existing methods.
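Since the abstract only sketches the idea, the following minimal NumPy example illustrates the generic group-DRO re-weighting principle it builds on: groups with higher average loss receive larger weights, so minimizing the weighted loss pushes the model to close group gaps. The exponentiated-gradient update, the step size `eta`, and all function names here are illustrative assumptions, not FairDRO's derived closed-form rule or the authors' implementation.

```python
import numpy as np

def group_average_losses(losses, groups, num_groups):
    """Average per-example losses within each (sensitive) group."""
    return np.array([losses[groups == g].mean() for g in range(num_groups)])

def dro_reweight(group_losses, eta=1.0):
    """Up-weight worse-off groups via an exponentiated-gradient step.

    Weights proportional to exp(eta * loss) are a common ascent step on
    the inner maximization of a group-DRO objective (an assumption here,
    not the closed-form solution derived in the paper).
    """
    w = np.exp(eta * group_losses)
    return w / w.sum()

# Toy example: three groups whose per-example losses differ on average.
rng = np.random.default_rng(0)
losses = rng.exponential(scale=[0.5, 1.0, 2.0], size=(100, 3)).ravel()
groups = np.tile(np.arange(3), 100)

gl = group_average_losses(losses, groups, num_groups=3)
weights = dro_reweight(gl, eta=0.5)
robust_objective = float(weights @ gl)  # weighted, worst-case-leaning loss
print("group losses:", gl)
print("DRO weights:", weights)  # the highest-loss group gets the largest weight
print("robust objective:", robust_objective)
```

Note that FairDRO applies this robustification classwise, i.e., separately within each label, rather than over groups globally; the sketch above omits that structure for brevity.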
Keywords: Artificial intelligence; Distributionally robust optimization; Group fairness; In-processing; Trustworthy.