q-means: A quantum algorithm for unsupervised machine learning

Abstract

Quantum machine learning is one of the most promising applications of a full-scale quantum computer. Over the past few years, many quantum machine learning algorithms have been proposed that can potentially offer considerable speedups over the corresponding classical algorithms. In this paper, we introduce q-means, a new quantum algorithm for clustering, which is a canonical problem in unsupervised machine learning. The q-means algorithm has convergence and precision guarantees similar to $k$-means, and it outputs with high probability a good approximation of the $k$ cluster centroids like the classical algorithm. Given a dataset of $N$ $d$-dimensional vectors $v_i$ (seen as a matrix $V \in \mathbb{R}^{N \times d}$) stored in QRAM, the running time of q-means is $\widetilde{O}\left( k d \frac{\eta}{\delta^2}\kappa(V)(\mu(V) + k \frac{\eta}{\delta}) + k^2 \frac{\eta^{1.5}}{\delta^2} \kappa(V)\mu(V) \right)$ per iteration, where $\kappa(V)$ is the condition number, $\mu(V)$ is a parameter that appears in quantum linear algebra procedures, and $\eta = \max_{i} ||v_{i}||^{2}$. For a natural notion of well-clusterable datasets, the running time becomes $\widetilde{O}\left( k^2 d \frac{\eta^{2.5}}{\delta^3} + k^{2.5} \frac{\eta^2}{\delta^3} \right)$ per iteration, which is linear in the number of features $d$, and polynomial in the rank $k$, the maximum square norm $\eta$, and the error parameter $\delta$. Both running times are only polylogarithmic in the number of data points $N$. Our algorithm provides substantial savings compared to the classical $k$-means algorithm, which runs in time $O(kdN)$ per iteration, particularly for the case of large datasets.
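
For reference, here is a minimal sketch of a single classical k-means (Lloyd's) iteration, whose $O(kdN)$ per-iteration cost is the baseline the abstract compares against. This is illustrative only; the function and variable names are assumptions, not taken from the paper.

```python
import numpy as np

def kmeans_iteration(V, centroids):
    """One classical (Lloyd's) k-means iteration.

    V         : (N, d) array of data points
    centroids : (k, d) array of current cluster centroids
    Returns the updated (k, d) centroids.

    Computing all N*k point-to-centroid distances over d dimensions
    costs O(k*d*N), the classical per-iteration cost quoted above.
    """
    # Squared Euclidean distance from every point to every centroid: shape (N, k)
    dists = ((V[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)  # nearest-centroid assignment for each point

    new_centroids = centroids.copy()
    for j in range(centroids.shape[0]):
        members = V[labels == j]
        if len(members) > 0:  # keep the old centroid if a cluster receives no points
            new_centroids[j] = members.mean(axis=0)
    return new_centroids
```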
