Adequacy of the Gradient-Descent Method for Classifier Evasion Attacks

6 April 2017
Yi Han
Benjamin I. P. Rubinstein
Communities: SILM, AAML
Abstract

Despite the wide use of machine learning in adversarial settings, including computer security, recent studies have demonstrated vulnerabilities to evasion attacks: carefully crafted adversarial samples that closely resemble legitimate instances but cause misclassification. In this paper, we examine the adequacy of the leading approach to generating adversarial samples: the gradient-descent method. In particular, (1) we perform extensive experiments on three datasets (MNIST, USPS and Spambase) to analyse the effectiveness of the gradient-descent method against non-linear support vector machines, and conclude that carefully reduced kernel smoothness can significantly increase robustness to the attack; (2) we demonstrate that separated inter-class support vectors lead to more secure models, and propose a margin-like quantity that can efficiently predict a model's susceptibility to gradient-descent attacks before any attack is launched; and (3) we design a new adversarial sample construction algorithm based on optimising the multiplicative ratio of the class decision functions.
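To ground the attack the abstract studies, the sketch below implements a basic gradient-descent evasion against a non-linear (RBF-kernel) SVM: starting from a correctly classified sample, it descends the classifier's decision function until the predicted label flips. Everything here is an illustrative assumption rather than the authors' exact setup: the binary digits task stands in for MNIST/USPS, a finite-difference gradient replaces the analytic kernel gradient, and hyperparameters such as `gamma` and the step size are arbitrary. The paper's contribution (3) would swap this plain decision-function objective for a multiplicative ratio of class decision functions.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import SVC

# Binary task (3s vs 7s) as a small stand-in for MNIST/USPS; pixels in [0, 1].
X, y = load_digits(return_X_y=True)
mask = (y == 3) | (y == 7)
X, y = X[mask] / 16.0, (y[mask] == 7).astype(int)

# Victim model: non-linear soft-margin SVM. `gamma` sets kernel smoothness
# (smaller gamma => smoother kernel), the property the paper relates to
# robustness against this attack.
clf = SVC(kernel="rbf", C=1.0, gamma=0.02).fit(X, y)

def num_grad(f, x, eps=1e-4):
    """Finite-difference gradient of the scalar function f at x
    (an illustrative stand-in for the analytic RBF-kernel gradient)."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def evade(clf, x0, step=0.05, max_iter=200):
    """Gradient-descent evasion: from a sample with decision value f(x) > 0
    (class 1), take normalised descent steps on f until the sign flips,
    i.e. the classifier is evaded. Steps are clipped to keep a valid image."""
    f = lambda x: clf.decision_function(x.reshape(1, -1))[0]
    x = x0.copy()
    for _ in range(max_iter):
        if f(x) < 0:                      # crossed the boundary: success
            return x
        g = num_grad(f, x)
        x = np.clip(x - step * g / (np.linalg.norm(g) + 1e-12), 0.0, 1.0)
    return x                              # budget exhausted (attack failed)

seed = X[y == 1][0]                       # a legitimate '7'
adv = evade(clf, seed)
print("label:", clf.predict(seed.reshape(1, -1))[0],
      "->", clf.predict(adv.reshape(1, -1))[0],
      "| L2 distortion:", round(float(np.linalg.norm(adv - seed)), 3))
```

Rerunning the loop with a larger `gamma` (a less smooth kernel) would, per the abstract's claim about reduced kernel smoothness, be expected to require more iterations or exhaust the step budget more often.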

View on arXiv: 1704.01704