Non-stationary Online Learning for Curved Losses: Improved Dynamic Regret via Mixability

12 June 2025
Yu-Jie Zhang
Peng Zhao
Masashi Sugiyama
Main: 9 pages · 1 figure · 1 table · Bibliography: 2 pages · Appendix: 14 pages
Abstract

Non-stationary online learning has drawn much attention in recent years. Despite considerable progress, dynamic regret minimization has primarily focused on convex functions, leaving functions with stronger curvature (e.g., squared or logistic loss) underexplored. In this work, we address this gap by showing that the regret can be substantially improved by leveraging the concept of mixability, a property that generalizes exp-concavity to effectively capture loss curvature. Let $d$ denote the dimensionality and $P_T$ the path length of comparators, which reflects the environmental non-stationarity. We demonstrate that an exponential-weight method with fixed-share updates achieves an $\mathcal{O}(d T^{1/3} P_T^{2/3} \log T)$ dynamic regret for mixable losses, improving upon the best-known $\mathcal{O}(d^{10/3} T^{1/3} P_T^{2/3} \log T)$ result (Baby and Wang, 2021) in $d$. More importantly, this improvement arises from a simple yet powerful analytical framework that exploits mixability and avoids the Karush-Kuhn-Tucker-based analysis required by existing work.
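
The fixed-share exponential-weight scheme the abstract refers to goes back to Herbster and Warmuth's tracking framework. As a rough illustration only, the sketch below runs one round of such an update over a finite pool of experts; the expert discretization, learning rate eta, and switching rate alpha are assumptions made for the example, not the paper's actual algorithm or tuning, which exploits mixability.

import numpy as np

def fixed_share_update(weights, losses, eta, alpha):
    """One exponential-weight step followed by a fixed-share mixing step.

    weights : current probability vector over the expert pool
    losses  : per-expert losses observed this round
    eta     : learning rate (illustrative; the paper ties it to mixability)
    alpha   : fixed-share rate that redistributes a little mass uniformly,
              which is what lets the method track moving comparators
    """
    v = weights * np.exp(-eta * losses)   # exponential reweighting
    v /= v.sum()                          # renormalize to a distribution
    n = len(weights)
    return alpha / n + (1.0 - alpha) * v  # mix with the uniform distribution

# Toy usage: four experts, uniform prior, one round of feedback.
w = np.full(4, 0.25)
w = fixed_share_update(w, np.array([0.9, 0.1, 0.4, 0.6]), eta=0.5, alpha=0.01)
print(w)  # low-loss experts gain mass, but no weight collapses to zero

The fixed-share mixing step is what enables dynamic (rather than static) regret guarantees: without the uniform floor, the weight of a currently poor expert can decay so far that the method cannot recover when the environment shifts.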

@article{zhang2025_2506.10616,
  title={Non-stationary Online Learning for Curved Losses: Improved Dynamic Regret via Mixability},
  author={Yu-Jie Zhang and Peng Zhao and Masashi Sugiyama},
  journal={arXiv preprint arXiv:2506.10616},
  year={2025}
}