
Fairness-aware Federated Minimax Optimization with Convergence Guarantee

Conference on Algebraic Informatics (CAI), 2023
Main: 9 pages, 2 figures, 3 tables; bibliography: 1 page
Abstract

Federated learning (FL) has garnered considerable attention due to its privacy-preserving properties. Nonetheless, the lack of direct control over user data can lead to group fairness issues, where models become biased with respect to sensitive attributes such as race or gender. To tackle this issue, this paper proposes a novel algorithm, fair federated averaging with the augmented Lagrangian method (FFALM), explicitly designed to address group fairness issues in FL. Specifically, we impose a fairness constraint on the training objective and solve the minimax reformulation of the constrained optimization problem. We then derive a theoretical upper bound on the convergence rate of FFALM. The effectiveness of FFALM in improving fairness is shown empirically on the CelebA and UTKFace datasets in the presence of severe statistical heterogeneity.
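The abstract describes turning a fairness-constrained objective into a minimax problem via an augmented Lagrangian. The toy sketch below illustrates that general idea only, not the paper's FFALM algorithm: a scalar loss `f`, a hypothetical "fairness gap" constraint `c(theta) <= 0`, and alternating descent on the model parameter with ascent on the multiplier. All names and step sizes here are illustrative assumptions.

```python
# Toy augmented-Lagrangian minimax sketch (NOT the paper's FFALM):
# minimize f(theta) subject to c(theta) <= 0, via
#   L(theta, lam) = f(theta) + lam * c(theta) + (rho/2) * max(c(theta), 0)**2
# solved as min over theta, max over lam >= 0.

def f(theta):
    # Toy training loss; unconstrained minimum at theta = 2.
    return (theta - 2.0) ** 2

def c(theta):
    # Hypothetical fairness-gap constraint: feasible when theta <= 1.
    return theta - 1.0

rho, eta = 10.0, 0.05   # penalty weight and step size (illustrative choices)
theta, lam = 0.0, 0.0
for _ in range(2000):
    viol = max(c(theta), 0.0)
    # Gradient of L in theta: f'(theta) + lam * c'(theta) + rho * viol * c'(theta)
    grad_theta = 2.0 * (theta - 2.0) + lam + rho * viol
    theta -= eta * grad_theta                   # primal descent step
    lam = max(lam + eta * c(theta), 0.0)        # dual ascent step, multiplier kept nonnegative

# The iterates approach the constrained optimum theta* = 1 (with lam* = 2).
print(round(theta, 2))
```

In the constrained optimum the multiplier settles at the value that balances the loss gradient against the constraint gradient, which is the same mechanism the abstract's minimax reformulation relies on, scaled up to federated model parameters and a group-fairness constraint.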
