
Can Adversarially Robust Learning Leverage Computational Hardness?

Abstract

Making learners robust to adversarial perturbation at test time (i.e., evasion attacks) or training time (i.e., poisoning attacks) has emerged as a challenging task. It is known that for some natural settings, sublinear perturbations in the training phase or the testing phase can drastically decrease the quality of the predictions. These negative results, however, are information theoretic and only prove the existence of such successful adversarial perturbations. A natural question for these settings is whether or not we can make classifiers computationally robust to polynomial-time attacks. In this work, we prove strong barriers against achieving such envisioned computational robustness both for evasion and poisoning attacks. In particular, we show that if the test instances come from a product distribution (e.g., uniform over $\{0,1\}^n$ or $[0,1]^n$, or an isotropic $n$-variate Gaussian) and there is an initial constant error, then there exists a polynomial-time attack that finds adversarial examples of Hamming distance $O(\sqrt{n})$. For poisoning attacks, we prove that for any learning algorithm with sample complexity $m$ and any efficiently computable "predicate" defining some "bad" property $B$ of the produced hypothesis (e.g., failing on a particular test) that happens with an initial constant probability, there exist polynomial-time online poisoning attacks that tamper with $O(\sqrt{m})$ many examples, replace them with other correctly labeled examples, and increase the probability of the bad event $B$ to $\approx 1$. Both our poisoning and evasion attacks are black-box in how they access their corresponding components of the system (i.e., the hypothesis, the concept, and the learning algorithm) and make no further assumptions about the classifier or the learning algorithm producing the classifier.
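To make the evasion setting concrete, the following is a minimal, hypothetical sketch of the kind of black-box search the abstract refers to: querying only the hypothesis, it looks for an adversarial example within Hamming distance roughly $\sqrt{n}$ of a test instance over $\{0,1\}^n$. This is an illustration of the access model and perturbation budget, not the authors' attack; the names `h`, `x`, and `num_trials` are illustrative assumptions.

```python
import math
import random

def evasion_search(h, x, num_trials=1000):
    """Search for x' within Hamming distance ~sqrt(n) of x with h(x') != h(x).

    h is a black-box classifier (label queries only), x is a list of bits.
    Returns a perturbed input that flips h's prediction, or None if the
    trial budget is exhausted. Illustrative sketch only.
    """
    n = len(x)
    budget = math.isqrt(n) + 1          # O(sqrt(n)) Hamming-distance budget
    original = h(x)
    for _ in range(num_trials):
        coords = random.sample(range(n), min(budget, n))
        x_adv = list(x)
        for i in coords:
            x_adv[i] ^= 1               # flip the chosen coordinates
        if h(x_adv) != original:
            return x_adv                # adversarial example found
    return None                         # none found within the trial budget
```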
