CountSketches, Feature Hashing and the Median of Three

International Conference on Machine Learning (ICML), 2021
Abstract

In this paper, we revisit the classic CountSketch method, a sparse, random projection that transforms a (high-dimensional) Euclidean vector $v$ to a vector of dimension $(2t-1)s$, where $t, s > 0$ are integer parameters. It is known that even for $t = 1$, a CountSketch allows estimating coordinates of $v$ with variance bounded by $\|v\|_2^2/s$. For $t > 1$, the estimator takes the median of $2t-1$ independent estimates, and the probability that the estimate is off by more than $2\|v\|_2/\sqrt{s}$ is exponentially small in $t$. This suggests choosing $t$ to be logarithmic in a desired inverse failure probability. However, implementations of CountSketch often use a small, constant $t$. Previous work only predicts a constant factor improvement in this setting. Our main contribution is a new analysis of CountSketch, showing an improvement in variance to $O(\min\{\|v\|_1^2/s^2, \|v\|_2^2/s\})$ when $t > 1$. That is, the variance decreases proportionally to $s^{-2}$, asymptotically for large enough $s$. We also study the variance in the setting where an inner product is to be estimated from two CountSketches. This finding suggests that the Feature Hashing method, which is essentially identical to CountSketch but does not make use of the median estimator, can be made more reliable at a small cost in settings where using a median estimator is possible. We confirm our theoretical findings in experiments and thereby help justify why a small constant number of estimates often suffices in practice. Our improved variance bounds are based on new general theorems about the variance and higher moments of the median of i.i.d. random variables that may be of independent interest.
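To make the setup concrete, the sketch and the median-of-$2t-1$ coordinate estimator described in the abstract can be sketched as follows. This is an illustrative Python/NumPy sketch under our own assumptions (tabulated random hash and sign functions, function names `countsketch` and `estimate`), not the authors' implementation:

```python
import numpy as np

def countsketch(v, s, t, seed=0):
    """Sketch v into 2t-1 rows of s buckets each, giving a
    vector of total dimension (2t-1)*s. Each row j has its own
    random bucket hash h_j and random sign g_j, stored as tables."""
    rng = np.random.default_rng(seed)
    d = len(v)
    rows = 2 * t - 1
    buckets = rng.integers(0, s, size=(rows, d))  # h_j(i) in {0,...,s-1}
    signs = rng.choice([-1, 1], size=(rows, d))   # g_j(i) in {-1, +1}
    sketch = np.zeros((rows, s))
    for j in range(rows):
        # Add g_j(i) * v[i] into bucket h_j(i) for every coordinate i.
        np.add.at(sketch[j], buckets[j], signs[j] * v)
    return sketch, buckets, signs

def estimate(sketch, buckets, signs, i):
    """Estimate v[i] as the median of the 2t-1 independent
    per-row estimates g_j(i) * sketch[j, h_j(i)]."""
    rows = sketch.shape[0]
    ests = [signs[j, i] * sketch[j, buckets[j, i]] for j in range(rows)]
    return float(np.median(ests))
```

With $t = 2$ this is the "median of three" of the title; Feature Hashing corresponds to keeping a single row ($t = 1$), where the estimate is just one bucket value and no median is taken.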
