We study the online calibration of multi-dimensional forecasts over an arbitrary convex set $\mathcal{P} \subset \mathbb{R}^d$ relative to an arbitrary norm $\Vert\cdot\Vert$. We connect this with the problem of external regret minimization for online linear optimization, showing that if it is possible to guarantee $O(\sqrt{\rho T})$ worst-case regret after $T$ rounds when actions are drawn from $\mathcal{P}$ and losses are drawn from the unit ball of the dual norm $\Vert\cdot\Vert_{*}$, then it is also possible to obtain $\epsilon$-calibrated forecasts after $T = \exp(O(\rho/\epsilon^2))$ rounds. When $\mathcal{P}$ is the $d$-dimensional simplex and $\Vert\cdot\Vert$ is the $\ell_1$-norm, the existence of $O(\sqrt{T \log d})$-regret algorithms for learning with experts implies that it is possible to obtain $\epsilon$-calibrated forecasts after $T = d^{O(1/\epsilon^2)}$ rounds, recovering a recent result of Peng (2025).

Interestingly, our algorithm obtains this guarantee without requiring access to any online linear optimization subroutine or knowledge of the optimal rate $\rho$ -- in fact, our algorithm is identical for every setting of $\mathcal{P}$ and $\Vert\cdot\Vert$. Instead, we show that the optimal regularizer for the above OLO problem can be used to upper bound the above calibration error by a swap regret, which we then minimize by running the recent TreeSwap algorithm with Follow-The-Leader as a subroutine.

Finally, we prove that any online calibration algorithm that guarantees $\epsilon$-calibration error over the $d$-dimensional simplex requires $T \geq d^{\Omega(1/\epsilon)}$ rounds (assuming $\epsilon \geq \Omega(1/d)$). This strengthens the corresponding lower bound of Peng, and shows that an exponential dependence on $1/\epsilon$ is necessary.
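To make the simplex case concrete, the claimed rate follows by plugging the experts bound into the general reduction. A short worked derivation (the Hedge regret bound for $d$ experts is standard; writing it as $O(\sqrt{\rho T})$ with $\rho = \log d$ is our illustrative bookkeeping, not notation from the paper):

    % Hedge over d experts: R_T = O(sqrt(T log d)), i.e. rho = log d.
    % Plugging into T = exp(O(rho / eps^2)) from the reduction:
    \[
      T \;=\; \exp\!\bigl(O(\rho/\epsilon^{2})\bigr)
        \;=\; \exp\!\bigl(O(\log d/\epsilon^{2})\bigr)
        \;=\; d^{O(1/\epsilon^{2})}.
    \]

Set against the lower bound $T \geq d^{\Omega(1/\epsilon)}$, the remaining gap is only in the exponent: $1/\epsilon$ versus $1/\epsilon^2$.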
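The calibration objective itself is easy to state in code. Below is a minimal sketch (ours, for illustration; the function name and the per-round normalization are assumptions, not the paper's conventions) that computes the empirical $\ell_1$ calibration error of a forecast sequence over the simplex, assuming forecasts take finitely many values:

    import numpy as np

    def l1_calibration_error(forecasts, outcomes):
        # Empirical l1 calibration error:
        #   (1/T) * sum over distinct forecast values p of
        #   || sum_{t : p_t = p} (y_t - p) ||_1,
        # where y_t is the (one-hot) outcome at round t.
        residuals = {}
        for p, y in zip(forecasts, outcomes):
            key = tuple(np.round(p, 8))  # hashable key; assumes discretized forecasts
            r = np.asarray(y, float) - np.asarray(p, float)
            residuals[key] = residuals.get(key, 0.0) + r
        return sum(np.abs(r).sum() for r in residuals.values()) / len(forecasts)

    # A constant forecaster matching the true outcome distribution is
    # asymptotically calibrated: the error below shrinks like 1/sqrt(T).
    rng = np.random.default_rng(0)
    T, p = 10_000, np.array([0.3, 0.7])
    forecasts = [p] * T
    outcomes = [np.eye(2)[rng.choice(2, p=p)] for _ in range(T)]
    print(l1_calibration_error(forecasts, outcomes))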
@article{fishelson2025_2505.21460,
  title   = {High-Dimensional Calibration from Swap Regret},
  author  = {Maxwell Fishelson and Noah Golowich and Mehryar Mohri and Jon Schneider},
  journal = {arXiv preprint arXiv:2505.21460},
  year    = {2025}
}