Proper measure for adversarial robustness

Abstract

This paper analyzes the problems with standard adversarial accuracy and adversarial training. We argue that standard adversarial accuracy fails to properly measure the robustness of classifiers. To address these problems, we introduce a new measure of classifier robustness called genuine adversarial accuracy. It measures the adversarial robustness of classifiers without trading off accuracy on clean data against accuracy on adversarially perturbed samples. In addition, it does not favor a model with invariance-based adversarial examples: samples whose predicted classes remain unchanged even though their perceptual classes have changed. We prove that a single nearest neighbor (1-NN) classifier is the most robust classifier according to genuine adversarial accuracy for given data and a distance metric, provided the class of each data point is unique. Based on this result, we suggest that using a poor distance metric may be the reason for the tradeoff between test accuracy and $l_p$ norm-based test adversarial robustness. Code for the experiments and projections for genuine adversarial accuracy is available at https://github.com/hjk92g/proper_measure_robustness.
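The 1-NN result has a simple geometric intuition: under a Euclidean metric, a 1-NN classifier keeps its prediction for a training point under any perturbation smaller than half the distance to the nearest differently labeled training point. The sketch below is a minimal illustration of such a classifier on toy one-dimensional data, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def one_nn_predict(x, train_X, train_y):
    """Predict the label of x as the label of its single nearest
    training point under the Euclidean (l_2) distance."""
    dists = np.linalg.norm(train_X - x, axis=1)
    return train_y[np.argmin(dists)]

# Toy data: two classes on a line. The 1-NN decision boundary sits at
# the midpoint (2.5) between the closest oppositely labeled points,
# so each training point tolerates perturbations up to half that gap.
train_X = np.array([[0.0], [1.0], [4.0], [5.0]])
train_y = np.array([0, 0, 1, 1])
print(one_nn_predict(np.array([2.4]), train_X, train_y))  # -> 0
print(one_nn_predict(np.array([2.6]), train_X, train_y))  # -> 1
```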
