Beyond Minimax Rates in Group Distributionally Robust Optimization via a Novel Notion of Sparsity

1 October 2024
Quan Nguyen
Nishant A. Mehta
Cristóbal Guzmán
arXiv:2410.00690
Abstract

The minimax sample complexity of group distributionally robust optimization (GDRO) has been determined up to a $\log(K)$ factor, where $K$ is the number of groups. In this work, we venture beyond the minimax perspective via a novel notion of sparsity that we dub $(\lambda, \beta)$-sparsity. In short, this condition means that at any parameter $\theta$, there is a set of at most $\beta$ groups whose risks at $\theta$ are all at least $\lambda$ larger than the risks of the other groups. To find an $\epsilon$-optimal $\theta$, we show via a novel algorithm and analysis that the $\epsilon$-dependent term in the sample complexity can swap a linear dependence on $K$ for a linear dependence on the potentially much smaller $\beta$. This improvement leverages recent progress in sleeping bandits, showing a fundamental connection between the two-player zero-sum game optimization framework for GDRO and per-action regret bounds in sleeping bandits. We next show an adaptive algorithm which, up to log factors, achieves a sample complexity bound that adapts to the best $(\lambda, \beta)$-sparsity condition that holds. We also show how to obtain a dimension-free semi-adaptive sample complexity bound with a computationally efficient method. Finally, we demonstrate the practicality of the $(\lambda, \beta)$-sparsity condition and the improved sample efficiency of our algorithms on both synthetic and real-life datasets.
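
To make the definition concrete, below is a minimal sketch (not from the paper) of how one might test the $(\lambda, \beta)$-sparsity condition at a fixed $\theta$, assuming the $K$ group risks at $\theta$ are available as a vector; the function name is hypothetical.

import numpy as np

def is_lambda_beta_sparse(risks: np.ndarray, lam: float, beta: int) -> bool:
    """Check the (lambda, beta)-sparsity condition at a single parameter theta.

    `risks` holds the K group risks at theta. The condition asks for a set S of
    at most beta groups whose risks all exceed every risk outside S by at least
    lam. It suffices to test the top-b risk groups for b = 1, ..., beta, since
    any valid S must consist of the highest-risk groups.
    """
    sorted_risks = np.sort(risks)[::-1]  # group risks in descending order
    k = len(sorted_risks)
    for b in range(1, min(beta, k) + 1):
        if b == k:
            return True  # S covers all groups, so there is nothing to dominate
        # The smallest risk inside the top-b set must exceed the largest risk
        # outside it by at least lam.
        if sorted_risks[b - 1] - sorted_risks[b] >= lam:
            return True
    return False

# Example: K = 5 group risks at some theta; the two highest risks sit
# well above the rest, so (0.3, 2)-sparsity holds.
risks = np.array([0.90, 0.85, 0.40, 0.35, 0.30])
print(is_lambda_beta_sparse(risks, lam=0.3, beta=2))  # True (gap 0.85 - 0.40 = 0.45)

In practice the group risks are unknown and must be estimated from samples, which is what the paper's algorithms address; this check only illustrates the condition itself.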
