
Evasion and Hardening of Tree Ensemble Classifiers

Abstract

Recent work has successfully constructed adversarial "evading" instances for differentiable prediction models. However, generating adversarial instances for tree ensembles, a piecewise-constant class of models, has remained an open problem. In this paper, we construct both exact and approximate evasion algorithms for tree ensembles: for a given instance x, we find the "nearest" instance x' such that the classifier predictions for x and x' differ. First, we show that finding such instances is practically feasible despite tree ensemble models being non-differentiable and the optimal evasion problem being NP-hard. In addition, we quantify the susceptibility of such models on the task of recognizing handwritten digits by measuring the distance between the original instance and the modified instance under the L0, L1, L2 and L-infinity norms. For comparison, we also analyze a wide variety of classifiers, including linear and RBF-kernel models, a max-ensemble of linear models, and neural networks. Our analysis shows that tree ensembles produced by a state-of-the-art gradient boosting method are consistently the least robust models, notwithstanding their competitive accuracy. Finally, we show that a sufficient number of retraining rounds with L0-adversarial instances makes the hardened model three times harder to evade. This retraining also marginally improves classification accuracy, but simultaneously makes the model more susceptible to L1, L2 and L-infinity evasions.
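To make the evasion setting concrete, the sketch below approximates it on a gradient-boosted tree ensemble trained on handwritten digits: starting from an instance x, it greedily toggles one pixel at a time until the predicted label flips, then reports the resulting L0, L1, L2 and L-infinity distances. This greedy coordinate search is only an illustrative stand-in under assumed scikit-learn tooling, not the exact or approximate algorithms proposed in the paper.

```python
# Illustrative sketch (not the paper's method): approximate L0 evasion of a
# gradient-boosted tree ensemble via greedy per-pixel search on digit images.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
# A binary task (digit 3 vs. digit 7) keeps the example small and fast.
mask = (y == 3) | (y == 7)
X, y = X[mask], (y[mask] == 7).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

def greedy_l0_evasion(clf, x, lo, hi, max_changes=20):
    """Greedily set one pixel at a time to its min or max value until the
    predicted label flips; returns the evading instance."""
    x_adv = x.copy()
    orig = clf.predict(x.reshape(1, -1))[0]
    target = 1 - orig  # class whose predicted probability we try to raise
    for _ in range(max_changes):
        if clf.predict(x_adv.reshape(1, -1))[0] != orig:
            break
        best_feat, best_val, best_prob = None, None, -1.0
        for j in range(x_adv.size):
            for v in (lo, hi):
                if x_adv[j] == v:
                    continue
                trial = x_adv.copy()
                trial[j] = v
                prob = clf.predict_proba(trial.reshape(1, -1))[0, target]
                if prob > best_prob:
                    best_feat, best_val, best_prob = j, v, prob
        x_adv[best_feat] = best_val
    return x_adv

x = X_te[0]
x_adv = greedy_l0_evasion(clf, x, X.min(), X.max())
diff = x_adv - x
print("label flipped:",
      clf.predict(x.reshape(1, -1))[0] != clf.predict(x_adv.reshape(1, -1))[0])
print("L0 / L1 / L2 / Linf distances:",
      int(np.sum(diff != 0)), np.abs(diff).sum(),
      np.linalg.norm(diff), np.abs(diff).max())
```

The same kind of search could, in principle, be used to generate L0-adversarial training examples for the retraining ("hardening") rounds the abstract mentions, by repeatedly adding evading instances with their correct labels back into the training set.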
