
Killing Four Birds with one Gaussian Process: Analyzing Test-Time Attack Vectors on Classification

Abstract

The wide usage of Machine Learning (ML) leads to direct security threats, as ML algorithms are themselves vulnerable to a plethora of attacks. Different attack vectors are known: some target the training phase using manipulated data, while others take place at test time and aim for misclassification, leakage of the training data, or extraction of the model. Previous works studied different test-time attacks individually. We show that by using an ML model that enables formal analysis and allows control over the curvature of the decision surface, interesting insights can be gained when attack vectors are studied not in isolation but in relation to each other. For example, we show how Gaussian Process Classification can be secured against empirical membership inference by properly configuring the algorithm. In this configuration, however, the model's parameters are leaked, which allows an analytic computation of the training data; the data is thus exposed after all, against the original intention of protecting it. We extend our study to evasion attacks and find that, analogously, hardening the model against one attack amounts to enabling a different attacker.
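To make the trade-off described above concrete, the following is a minimal sketch (not the paper's experimental setup) of how the curvature of a Gaussian Process Classifier's decision surface can be steered through a kernel hyperparameter. The use of an RBF kernel length-scale as the curvature control, the specific length-scale values, and the make_moons toy data are illustrative assumptions.

```python
# Minimal sketch: the RBF length-scale controls how sharply the GP decision
# surface bends around individual training points. Sharper surfaces yield
# near-0/1 confidences on training data, which empirical membership-inference
# attacks typically exploit; smoother surfaces reduce that signal.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

X, y = make_moons(n_samples=100, noise=0.2, random_state=0)

for length_scale in (0.1, 2.0):  # short vs. long length-scale (assumed values)
    gpc = GaussianProcessClassifier(
        kernel=RBF(length_scale=length_scale),
        optimizer=None,  # keep the length-scale fixed instead of optimizing it
        random_state=0,
    ).fit(X, y)
    # Confidence assigned to the true class of each training point.
    train_conf = gpc.predict_proba(X)[np.arange(len(y)), y]
    print(f"length_scale={length_scale}: mean train confidence = {train_conf.mean():.3f}")

# Caveat mirroring the abstract's point: a fitted GP is non-parametric, so the
# "model parameters" include the stored training inputs (e.g. the fitted
# estimator's X_train_ attribute). Releasing them exposes exactly the data the
# defensive configuration was meant to protect.
```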
