Near-Optimal Evasion of Convex-Inducing Classifiers

14 March 2010

B. Nelson, Benjamin I. P. Rubinstein, Ling Huang, A. Joseph, S. Lau, Steven J. Lee, Satish Rao, Anthony Tran, and J. D. Tygar
Abstract

Classifiers are often used to detect miscreant activities. We study how an adversary can efficiently query a classifier to elicit information that allows the adversary to evade detection at near-minimal cost. We generalize the results of Lowd and Meek (2005) to convex-inducing classifiers: classifiers that partition their feature space into two sets, one of which is convex. We present algorithms that construct undetected instances of near-minimal cost using only polynomially many queries in the dimension of the space, and without reverse engineering the decision boundary.
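
The core primitive behind such query-based evasion is a one-dimensional membership-query search: given a known undetected instance and the adversary's desired (detected) instance, binary search along the segment between them finds an undetected instance near the decision boundary using only O(log(1/ε)) queries. The sketch below shows that building block, not the paper's full MultiLineSearch procedure; the names `classifier`, `x_minus`, and `x_target` are illustrative assumptions.

```python
import numpy as np

def binary_line_search(classifier, x_minus, x_target, eps=1e-3):
    """Membership-query binary search along the segment from x_minus
    (a known undetected instance) toward x_target (the adversary's
    desired instance, assumed detected).

    classifier(x) -> True if x is detected (positive class).
    Returns an undetected instance within eps * ||x_target - x_minus||
    of the decision boundary along this direction, using one query
    per halving step, i.e. O(log(1/eps)) queries in total.
    """
    if not classifier(x_target):
        return x_target  # the desired instance already evades detection
    lo, hi = 0.0, 1.0
    # Invariant: the point at fraction lo is undetected (negative),
    # the point at fraction hi is detected (positive).
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        if classifier(x_minus + mid * (x_target - x_minus)):
            hi = mid
        else:
            lo = mid
    return x_minus + lo * (x_target - x_minus)

# Usage with a hypothetical linear detector (flags x when w . x > b);
# the adversary only queries detect(), never inspects w or b.
w, b = np.array([1.0, 2.0]), 1.0
detect = lambda x: float(w @ x) > b

x_minus = np.zeros(2)             # known benign starting point
x_target = np.array([3.0, 4.0])   # ideal but detected instance
x_evade = binary_line_search(detect, x_minus, x_target)
assert not detect(x_evade)        # evades detection near the boundary
```

The paper's algorithms repeat such one-dimensional searches along polynomially many directions to certify that the returned instance is within a small factor of the minimal evasion cost, without ever reconstructing the boundary itself.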
