Full Swap Regret and Discretized Calibration

We study the problem of minimizing swap regret in structured normal-form games. Players have a very large (potentially infinite) number of pure actions, but each action has an embedding into -dimensional space and payoffs are given by bilinear functions of these embeddings. We provide an efficient learning algorithm for this setting that incurs at most swap regret after rounds.To achieve this, we introduce a new online learning problem we call \emph{full swap regret minimization}. In this problem, a learner repeatedly takes a (randomized) action in a bounded convex -dimensional action set and then receives a loss from the adversary, with the goal of minimizing their regret with respect to the \emph{worst-case} swap function mapping to . For varied assumptions about the convexity and smoothness of the loss functions, we design algorithms with full swap regret bounds ranging from to .Finally, we apply these tools to the problem of online forecasting to minimize calibration error, showing that several notions of calibration can be viewed as specific instances of full swap regret. In particular, we design efficient algorithms for online forecasting that guarantee at most -calibration error and \emph{discretized-calibration} error (when the forecaster is restricted to predicting multiples of ).
View on arXiv@article{fishelson2025_2502.09332, title={ Full Swap Regret and Discretized Calibration }, author={ Maxwell Fishelson and Robert Kleinberg and Princewill Okoroafor and Renato Paes Leme and Jon Schneider and Yifeng Teng }, journal={arXiv preprint arXiv:2502.09332}, year={ 2025 } }