We propose dimensionality reduction as a defense against evasion attacks on ML classifiers. We present and investigate a strategy that incorporates dimensionality reduction via Principal Component Analysis (PCA) to enhance the resilience of machine learning classifiers, targeting both the training and classification phases. We empirically evaluate this defense on multiple real-world datasets and demonstrate its feasibility. Our key findings are that the defense is (i) effective against strategic evasion attacks from the literature, increasing the resources required by an adversary for a successful attack by a factor of about two; (ii) applicable across a range of ML classifiers, including Support Vector Machines and Deep Neural Networks; and (iii) generalizable to multiple application domains, including image, malware, and human activity classification.
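To make the strategy concrete, the following is a minimal sketch of a PCA-based defense of this kind, assuming scikit-learn; the dataset, component count, and classifier choice are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Training phase: fit PCA on the training data and train the
# classifier in the reduced k-dimensional subspace.
k = 20  # number of retained principal components (assumed value)
pca = PCA(n_components=k).fit(X_train)
clf = SVC(kernel="linear").fit(pca.transform(X_train), y_train)

# Classification phase: every input, benign or adversarial, is
# projected onto the same principal components before being
# classified, discarding the low-variance directions that an
# evasion attack may exploit.
def classify(x):
    return clf.predict(pca.transform(np.atleast_2d(x)))

print("test accuracy:", clf.score(pca.transform(X_test), y_test))
```

The key design point is that the same projection is applied at both phases, so an adversary's perturbations are restricted to the retained principal subspace.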