A Theoretical Framework for Robustness of (Deep) Classifiers Under Adversarial Noise

Recent literature has pointed out that machine learning classifiers, including deep neural networks (DNNs), are vulnerable to adversarial samples: maliciously crafted inputs that force a classifier to produce wrong output labels. Multiple studies have tried to analyze and harden machine learning classifiers against such adversarial noise (AN). However, these studies are mostly empirical and provide little understanding of the underlying principles needed to evaluate the robustness of a classifier against AN. This paper proposes a unified framework, built on two metric spaces, to evaluate classifiers' robustness against AN and to provide general guidance for hardening such classifiers. The central idea of our work is that, for a given classification task, the robustness of a classifier against AN is determined by both the classifier itself and its oracle (such as a human annotator for that specific task). In particular: (1) by adding the oracle into the framework, we provide a general definition of the adversarial sample problem; (2) we theoretically formulate a definition that decides whether a classifier is always robust against AN (strong-robustness); (3) using the two metric spaces induced by the feature representations of the classifier and of the oracle, respectively, we prove that topological equivalence between these two spaces is sufficient for deciding whether the classifier is strong-robust at test time; (4) by training a DNN classifier with a Siamese architecture, we propose a new defense strategy, "Siamese training", that intuitively approaches topological equivalence between the two metric spaces. Experimental results show that Siamese training helps multiple DNN models achieve better accuracy than previous defense strategies in an adversarial setting, and DNN models after Siamese training exhibit better robustness than the state-of-the-art baselines.
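To make the "Siamese training" idea concrete, the sketch below pairs each clean input with a perturbed copy, passes both through the same weight-shared network, and penalizes the distance between their internal representations in addition to the usual classification loss. The network architecture, the one-step FGSM perturbation, the MSE penalty, and the weight `lam` are illustrative assumptions for this sketch, not the paper's exact configuration.

```python
# Minimal sketch of Siamese-style training: clean and perturbed inputs share
# one network, and a distance penalty pulls their representations together.
# Architecture, perturbation method (FGSM), and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    """A minimal classifier whose penultimate features serve as the representation."""
    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        z = self.feature(x)          # representation used for the Siamese penalty
        return self.head(z), z

def fgsm_perturb(model, x, y, eps=0.1):
    """One-step FGSM perturbation used to generate the paired 'noisy' input."""
    x_adv = x.clone().detach().requires_grad_(True)
    logits, _ = model(x_adv)
    F.cross_entropy(logits, y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def siamese_training_step(model, optimizer, x, y, lam=1.0):
    """Cross-entropy on clean inputs plus a distance penalty between the
    clean and perturbed representations (both branches share weights)."""
    x_adv = fgsm_perturb(model, x, y)
    logits_clean, z_clean = model(x)
    _, z_adv = model(x_adv)
    loss = F.cross_entropy(logits_clean, y) + lam * F.mse_loss(z_adv, z_clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Intuitively, driving the representation of a perturbed input toward that of its clean counterpart is one practical way to push the classifier-induced metric space toward agreement with the oracle's, which is the topological-equivalence condition the paper identifies.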