Adversarial Robustness May Be at Odds With Simplicity

Abstract

Current techniques in machine learning are so far unable to learn classifiers that are robust to adversarial perturbations. However, they are able to learn non-robust classifiers with very high accuracy, even in the presence of random perturbations. Towards explaining this gap, we highlight the hypothesis that \textit{robust classification may require more complex classifiers (i.e. more capacity) than standard classification}. In this note, we show that this hypothesis is indeed possible, by giving several theoretical examples of classification tasks and sets of "simple" classifiers for which: (1) There exists a simple classifier with high standard accuracy, and also high accuracy under random $\ell_\infty$ noise. (2) Any simple classifier is not robust: it must have high adversarial loss with $\ell_\infty$ perturbations. (3) Robust classification is possible, but only with more complex classifiers (exponentially more complex, in some examples). Moreover, \textit{there is a quantitative trade-off between robustness and standard accuracy among simple classifiers}. This suggests an alternate explanation of this phenomenon, which appears in practice: the trade-off may occur not because the classification task inherently requires such a trade-off (as in [Tsipras-Santurkar-Engstrom-Turner-Madry '18]), but because the structure of our current classifiers imposes such a trade-off.
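For concreteness, the adversarial loss with $\ell_\infty$ perturbations referred to in (2) can be read against the standard definition below; this notation (the classifier $f$, data distribution $\mathcal{D}$, loss $\ell$, and perturbation budget $\epsilon$) is our gloss on the abstract, not taken verbatim from the paper:

$$
\mathcal{L}_{\mathrm{adv}}(f) \;=\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\!\left[\, \max_{\|\delta\|_\infty \le \epsilon} \ell\big(f(x+\delta),\, y\big) \right].
$$

The standard loss corresponds to fixing $\delta = 0$, and accuracy under random $\ell_\infty$ noise replaces the inner maximum with an expectation over $\delta$ drawn from the ball $\{\delta : \|\delta\|_\infty \le \epsilon\}$; points (1)-(3) contrast these quantities for "simple" versus more complex classifiers.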
