
Minimizing Regret in Bandit Online Optimization in Unconstrained and Constrained Action Spaces

Abstract

We consider online convex optimization with zero-order oracle feedback. In particular, the decision maker does not know the explicit representation of the time-varying cost functions or their gradients. At each time step, she observes only the value of the cost function evaluated at her chosen action. The objective is to minimize the regret, that is, the difference between the sum of the costs she accumulates and that of the static optimal action had she known the sequence of cost functions a priori. We present a novel algorithm to minimize the regret in both unconstrained and constrained action spaces. Our algorithm hinges on a classical idea of one-point estimation of the gradients of the cost functions based on their observed values. However, our choice of randomization, and consequently our proof techniques, differ from those of past work. Letting T denote the number of queries of the zero-order oracle and n the problem dimension, the regret rate achieved is O(nT^{2/3}) for both constrained and unconstrained action spaces. Moreover, we adapt the presented algorithm to the setting with two-point feedback and demonstrate that the adapted procedure achieves the theoretical lower bound on the regret.
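To make the "classical idea of one-point estimation" concrete, the sketch below implements the standard one-point gradient estimator in the style of Flaxman, Kalai, and McMahan: with u drawn uniformly from the unit sphere, (n/delta) * f(x + delta*u) * u is an unbiased estimate of the gradient of a delta-smoothed version of f from a single function evaluation. This is only an illustration of the classical construction; the paper's own randomization and analysis differ, and the function, step size, and exploration radius below are hypothetical choices for demonstration.

```python
import numpy as np

def one_point_gradient_estimate(f, x, delta, rng):
    """Classical one-point gradient estimator from a single zero-order query.

    With u uniform on the unit sphere, (n / delta) * f(x + delta * u) * u is an
    unbiased estimate of the gradient of a delta-smoothed version of f.
    """
    n = x.shape[0]
    u = rng.standard_normal(n)
    u /= np.linalg.norm(u)          # uniform random direction on the unit sphere
    value = f(x + delta * u)        # the single observed cost value
    return (n / delta) * value * u

if __name__ == "__main__":
    # Toy usage: bandit gradient descent on a fixed quadratic cost (illustrative only).
    rng = np.random.default_rng(0)
    f = lambda z: np.sum((z - 1.0) ** 2)
    x = np.zeros(3)
    eta, delta = 0.05, 0.1          # illustrative step size and exploration radius
    for t in range(2000):
        g_hat = one_point_gradient_estimate(f, x, delta, rng)
        x -= eta * g_hat
    print(x)                        # drifts toward the minimizer (1, 1, 1)
```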
