Uncertainty about Uncertainty: Optimal Adaptive Algorithms for Estimating Mixtures of Unknown Coins

Abstract

Given a mixture between two populations of coins, "positive" coins that each have -- unknown and potentially different -- bias $\geq\frac{1}{2}+\Delta$ and "negative" coins with bias $\leq\frac{1}{2}-\Delta$, we consider the task of estimating the fraction $\rho$ of positive coins to within additive error $\epsilon$. We achieve an upper and lower bound of $\Theta(\frac{\rho}{\epsilon^2\Delta^2}\log\frac{1}{\delta})$ samples for a $1-\delta$ probability of success, where crucially, our lower bound applies to all fully-adaptive algorithms. Thus, our sample complexity bounds have tight dependence for every relevant problem parameter. A crucial component of our lower bound proof is a decomposition lemma (see Lemmas 17 and 18) showing how to assemble partially-adaptive bounds into a fully-adaptive bound, which may be of independent interest: though we invoke it for the special case of Bernoulli random variables (coins), it applies to general distributions. We present simulation results to demonstrate the practical efficacy of our approach for realistic problem parameters for crowdsourcing applications, focusing on the "rare events" regime where $\rho$ is small. The fine-grained adaptive flavor of both our algorithm and lower bound contrasts with much previous work in distributional testing and learning.
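To make the problem setup concrete, below is a minimal Python sketch of the estimation task. It implements only a naive, non-adaptive baseline (the same number of flips per sampled coin, classify by empirical majority), not the paper's optimal adaptive algorithm; the function names and parameter values (`naive_estimate`, `flips_per_coin`, the choices of $\rho$ and $\Delta$) are illustrative assumptions for this sketch.

```python
import random

def sample_coin(bias: float, flips: int) -> int:
    """Flip a coin with the given bias `flips` times; return the number of heads."""
    return sum(random.random() < bias for _ in range(flips))

def naive_estimate(coins, flips_per_coin: int) -> float:
    """Non-adaptive baseline: spend the same flip budget on every coin,
    classify a coin as positive if its empirical head frequency exceeds 1/2,
    and return the fraction classified positive as the estimate of rho.
    (The paper's algorithm instead adapts the per-coin budget.)"""
    positives = 0
    for bias in coins:
        heads = sample_coin(bias, flips_per_coin)
        if heads / flips_per_coin > 0.5:
            positives += 1
    return positives / len(coins)

if __name__ == "__main__":
    rho, gap = 0.1, 0.2  # illustrative values: positive fraction rho, bias gap Delta
    population = [
        0.5 + gap if random.random() < rho else 0.5 - gap
        for _ in range(10_000)
    ]
    print(f"estimated rho: {naive_estimate(population, flips_per_coin=50):.3f}")
```

The baseline's uniform budget is exactly what the adaptive setting improves on: in the rare-events regime (small $\rho$), most coins are negative and can be ruled out with far fewer flips than are needed to confidently confirm a positive, which is where the $\frac{\rho}{\epsilon^2\Delta^2}$ dependence comes from.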
