Optimal quantum mixing for slowly evolving sequences of Markov chains

Abstract

In this work we consider the problem of preparing the stationary distribution of irreducible, time-reversible Markov chains, a fundamental task in algorithmic Markov chain theory. In the classical setting, this task has a complexity lower bound of $\Omega(\delta^{-1})$, where $\delta$ is the spectral gap of the Markov chain, and other dependencies contribute only logarithmically. In the quantum case, the conjectured complexity is $O(\sqrt{\delta^{-1}})$, with other dependencies contributing only logarithmically. However, this bound has only been achieved for a few special classes of Markov chains. Here, we provide a method for the sequential preparation of stationary distributions for sequences of time-reversible $N$-state Markov chains, akin to the setting of simulated annealing methods. The complexity of preparation we achieve is $O(\sqrt{\delta^{-1}} \sqrt[4]{N})$, neglecting logarithmic factors. While this result falls short of the conjectured optimal time, it provides a quadratic improvement over naïve approaches. Moreover, for the case when the output distributions are required to be encoded in pure quantum states, we identify the settings where our algorithm is strictly optimal. Slowly evolving sequences of Markov chains appear naturally in reinforcement learning, and consequently our results can be readily applied in quantum machine learning as well.
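
To make the comparison between these scalings concrete, the following sketch (not part of the paper) evaluates the quoted bounds for illustrative values of the spectral gap δ and the state-space size N, neglecting logarithmic factors and constants. The sample values, and the use of the classical Ω(δ⁻¹) bound as the baseline, are assumptions made purely for illustration.

```python
# Illustrative comparison of the complexity scalings quoted in the abstract.
# Logarithmic factors and constants are ignored; values of delta and N are
# arbitrary example choices, not results from the paper.

def classical_cost(delta: float) -> float:
    # Classical lower bound: Omega(delta^{-1})
    return 1.0 / delta

def conjectured_quantum_cost(delta: float) -> float:
    # Conjectured optimal quantum scaling: O(sqrt(delta^{-1}))
    return (1.0 / delta) ** 0.5

def this_work_cost(delta: float, n_states: int) -> float:
    # Scaling achieved in this work: O(sqrt(delta^{-1}) * N^{1/4})
    return (1.0 / delta) ** 0.5 * n_states ** 0.25

if __name__ == "__main__":
    delta, n_states = 1e-6, 2 ** 20   # example spectral gap and state count
    print(f"classical           ~ {classical_cost(delta):.3e}")
    print(f"conjectured quantum ~ {conjectured_quantum_cost(delta):.3e}")
    print(f"this work           ~ {this_work_cost(delta, n_states):.3e}")
```

For gaps δ much smaller than 1/√N, the achieved scaling still improves substantially on the classical bound while retaining the √(δ⁻¹) dependence of the conjectured quantum optimum.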
