
arXiv:1607.02834

Tight Lower Bounds for Multiplicative Weights Algorithmic Families

11 July 2016
N. Gravin
Yuval Peres
Balasubramanian Sivan
Abstract

We study the fundamental problem of prediction with expert advice and develop regret lower bounds for a large family of algorithms for this problem. We develop simple adversarial primitives that lend themselves to various combinations, leading to sharp lower bounds for many algorithmic families. We use these primitives to show that the classic Multiplicative Weights Algorithm (MWA) has a regret of $\sqrt{\frac{T \ln k}{2}}$, thereby completely closing the gap between upper and lower bounds. We further show a regret lower bound of $\frac{2}{3}\sqrt{\frac{T \ln k}{2}}$ for a much more general family of algorithms than MWA, where the learning rate can be arbitrarily varied over time, or even picked from arbitrary distributions over time. We also use our primitives to construct adversaries in the geometric horizon setting for MWA to precisely characterize the regret at $\frac{0.391}{\sqrt{\delta}}$ for the case of $2$ experts, and a lower bound of $\frac{1}{2}\sqrt{\frac{\ln k}{2\delta}}$ for the case of an arbitrary number of experts $k$.
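For context, the classic Multiplicative Weights Algorithm the abstract refers to maintains one weight per expert and multiplies it by $e^{-\eta \ell}$ after observing that expert's loss $\ell$; with the standard tuning $\eta = \sqrt{8 \ln k / T}$, its regret is at most $\sqrt{T \ln k / 2}$, the bound the paper proves tight. Below is a minimal sketch of this standard algorithm (the function name, loss interface, and [0,1] loss model are illustrative assumptions, not taken from the paper):

```python
import math

def multiplicative_weights(k, T, loss, eta=None):
    """Run MWA with k experts for T rounds; return the realized regret.

    loss(t, i) must return expert i's loss in [0, 1] at round t.
    (This interface is an illustrative assumption, not from the paper.)
    """
    if eta is None:
        # Standard fixed learning rate giving regret <= sqrt(T ln k / 2).
        eta = math.sqrt(8 * math.log(k) / T)
    weights = [1.0] * k
    algo_loss = 0.0
    expert_loss = [0.0] * k
    for t in range(T):
        total = sum(weights)
        probs = [w / total for w in weights]          # play expert i w.p. probs[i]
        losses = [loss(t, i) for i in range(k)]
        algo_loss += sum(p * l for p, l in zip(probs, losses))  # expected loss
        for i in range(k):
            expert_loss[i] += losses[i]
            weights[i] *= math.exp(-eta * losses[i])  # multiplicative update
    return algo_loss - min(expert_loss)               # regret vs. best expert
```

For example, against the alternating-loss sequence `loss(t, i) = (t + i) % 2` with two experts, the realized regret stays within the $\sqrt{T \ln k / 2}$ guarantee. The lower bounds in the paper are constructed against exactly this family of exponential-update rules.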
