A statistical analysis of probabilistic counting algorithms

A data stream is a transiently observed sequence of data elements that arrive unordered and with repetitions. Interest lies in developing techniques capable of handling massive data streams, for example, streams of Internet traffic on routers, where, due to constraints on time and storage space, only relatively small subsets of the data can be processed at a time. This article deals with the problem of cardinality estimation for massive data streams, i.e., estimating the number of distinct elements in a stream when it is physically impossible to maintain a comprehensive list of previously observed elements. We focus on two approaches suggested in the literature, both of which involve indirect record keeping using pseudo-random variates: one stores selected order statistics, the other random projections. We consider maximum likelihood estimation in both cases and show that the resulting estimators have comparable asymptotic efficiency. We explain why these efficiencies are similar by demonstrating an unexpected connection between the two approaches. Finally, we apply our methods to three datasets.
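To make the order-statistics approach concrete, below is a minimal Python sketch of the classical k-minimum-values idea: each element is mapped to a pseudo-random point in (0,1) by a hash, and the k-th smallest point observed determines the cardinality estimate. The function name kmv_estimate, the choice k = 256, and the use of SHA-1 as the pseudo-random map are illustrative assumptions rather than the authors' construction; the paper analyses the maximum likelihood estimator, whereas this sketch uses the simpler unbiased estimator (k-1)/M_(k) based on the k-th order statistic M_(k).

```python
import hashlib
import random

def kmv_estimate(stream, k=256):
    """Minimal k-minimum-values sketch (illustrative, not the paper's MLE):
    keep only the k smallest distinct hash values and infer the number of
    distinct elements from the k-th smallest."""
    def h(x):
        # Pseudo-random map to (0,1); duplicate elements hash identically,
        # so repetitions in the stream are ignored automatically.
        d = hashlib.sha1(str(x).encode()).digest()
        return int.from_bytes(d[:8], "big") / 2**64

    smallest = set()   # the k smallest distinct hash values seen so far
    cutoff = 1.0       # current k-th smallest value
    for x in stream:
        v = h(x)
        if v < cutoff and v not in smallest:
            smallest.add(v)
            if len(smallest) > k:
                smallest.remove(max(smallest))
            if len(smallest) == k:
                cutoff = max(smallest)
    if len(smallest) < k:
        return len(smallest)   # fewer than k distinct elements: exact count
    # With n distinct elements mapped i.i.d. Uniform(0,1), the k-th order
    # statistic M_(k) ~ Beta(k, n-k+1), and (k-1)/M_(k) is unbiased for n.
    return (k - 1) / max(smallest)

# Example: one million draws from 100,000 possible values.
stream = (random.randrange(100_000) for _ in range(1_000_000))
print(kmv_estimate(stream))   # roughly 100,000
```

The memory cost is fixed at k hash values regardless of the stream length, and the relative error of such order-statistics estimators is known to scale roughly as 1/√k, which is the trade-off the choice of k controls.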