
arXiv:1905.11876
Adversarial Robustness Guarantees for Classification with Gaussian Processes

28 May 2019
Arno Blaas, A. Patané, Luca Laurenti, L. Cardelli, Marta Z. Kwiatkowska, Stephen J. Roberts
Abstract

We investigate adversarial robustness of Gaussian Process Classification (GPC) models. Given a compact subset of the input space $T \subseteq \mathbb{R}^d$ enclosing a test point $x^*$ and a GPC trained on a dataset $\mathcal{D}$, we aim to compute the minimum and the maximum classification probability for the GPC over all the points in $T$. In order to do so, we show how functions lower- and upper-bounding the GPC output in $T$ can be derived, and implement those in a branch-and-bound optimisation algorithm. For any error threshold $\epsilon > 0$ selected a priori, we show that our algorithm is guaranteed to reach values $\epsilon$-close to the actual values in finitely many iterations. We apply our method to investigate the robustness of GPC models on a 2D synthetic dataset, the SPAM dataset and a subset of the MNIST dataset, providing comparisons of different GPC training techniques, and show how our method can be used for interpretability analysis. Our empirical analysis suggests that GPC robustness increases with more accurate posterior estimation.
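The abstract's guarantee — branch and bound over a compact region with lower-bounding functions, terminating $\epsilon$-close to the true minimum — can be illustrated with a generic sketch. This is not the paper's GPC-specific construction: here `f` is an arbitrary objective and `lower_bound` is a user-supplied function assumed to under-approximate `f` on each subregion (e.g. via a Lipschitz constant), both placeholders standing in for the GPC output bounds the paper derives.

```python
import heapq

def branch_and_bound_min(f, lower_bound, a, b, eps=1e-3, max_iter=100_000):
    """Minimise f over [a, b] to within eps by branch and bound.

    lower_bound(lo, hi) must return a value <= min of f on [lo, hi].
    The returned value v satisfies min f <= v <= min f + eps.
    """
    # Incumbent: best function value actually evaluated so far (an upper bound).
    best = min(f(a), f(b), f((a + b) / 2))
    # Priority queue of regions, keyed by their lower bound (most promising first).
    heap = [(lower_bound(a, b), a, b)]
    for _ in range(max_iter):
        lb, lo, hi = heapq.heappop(heap)
        if best - lb <= eps:
            # The smallest lower bound in the queue is within eps of the
            # incumbent, so the incumbent is eps-close to the true minimum.
            return best
        mid = (lo + hi) / 2
        best = min(best, f(mid))          # refine the incumbent
        for sub_lo, sub_hi in ((lo, mid), (mid, hi)):
            sub_lb = lower_bound(sub_lo, sub_hi)
            if sub_lb < best - eps:       # prune regions that cannot improve
                heapq.heappush(heap, (sub_lb, sub_lo, sub_hi))
        if not heap:                      # everything pruned: incumbent is eps-close
            return best
    return best

# Hypothetical usage: minimise (x - 1)^2 on [-2, 3]. A Lipschitz constant
# L = 6 bounds |f'| there, giving a valid interval lower bound.
f = lambda x: (x - 1) ** 2
lb = lambda lo, hi: min(f(lo), f(hi)) - 6 * (hi - lo) / 2
value = branch_and_bound_min(f, lb, -2.0, 3.0, eps=1e-3)
```

Because each split halves the interval and the lower-bound gap shrinks with the interval width, the loop terminates after finitely many iterations with an $\epsilon$-close value, mirroring the guarantee stated in the abstract.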
