ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

Lattice Climber Attack: Adversarial attacks for randomized mixtures of classifiers

12 June 2025
Lucas Gnecco-Heredia
Benjamin Négrevergne
Yann Chevaleyre
    AAML
Main: 15 pages · Bibliography: 2 pages · Appendix: 13 pages · 13 figures · 4 tables
Abstract

Finite mixtures of classifiers (a.k.a. randomized ensembles) have been proposed as a way to improve robustness against adversarial attacks. However, existing attacks have been shown to be ill-suited to this kind of classifier. In this paper, we discuss the problem of attacking a mixture in a principled way and, based on a geometrical analysis of the problem, introduce two desirable properties of attacks: effectiveness and maximality. We then show that existing attacks do not satisfy both of these properties. Finally, we introduce a new attack, called the lattice climber attack, with theoretical guarantees in the binary linear setting, and demonstrate its performance through experiments on synthetic and real datasets.
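The lattice climber attack itself is not described in this abstract. As a hedged illustration of the underlying problem setting only — attacking a finite mixture by maximizing the expected loss over the sampled classifiers, here a mixture of binary linear classifiers under an L2 budget — a minimal sketch (all weights, losses, and parameter names are illustrative assumptions, not the paper's method):

```python
import numpy as np

# Illustrative setting: a mixture of 3 binary linear classifiers
# f_i(x) = sign(w_i . x + b_i), each sampled with probability p_i.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))   # one weight row per classifier
b = rng.normal(size=3)
p = np.ones(3) / 3            # uniform mixing probabilities

def expected_margin_loss(x, y):
    """Expected negative margin E_i[-y * (w_i . x + b_i)] of the mixture."""
    margins = y * (W @ x + b)
    return float(p @ (-margins))

def attack(x, y, eps=0.5, steps=50, lr=0.1):
    """Projected gradient ascent on the expected loss over an L2 ball.

    For this linear loss the gradient is constant, so the loop
    converges to the boundary point eps * grad / ||grad||; it is kept
    as a loop to mirror the generic iterative-attack recipe.
    """
    delta = np.zeros_like(x)
    grad = -y * (p @ W)       # gradient of the expected loss w.r.t. x
    for _ in range(steps):
        delta = delta + lr * grad
        norm = np.linalg.norm(delta)
        if norm > eps:        # project back onto the eps-ball
            delta = delta * (eps / norm)
    return x + delta

x = np.array([1.0, -0.5])
y = 1.0
x_adv = attack(x, y)
```

Such an expected-loss attack is exactly the kind of baseline the paper argues can fail to be effective or maximal against a mixture, since the classifier actually evaluated at test time is a single sampled member rather than the average.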

View on arXiv
@article{gnecco-heredia2025_2506.10888,
  title={Lattice Climber Attack: Adversarial attacks for randomized mixtures of classifiers},
  author={Lucas Gnecco-Heredia and Benjamin N{\'e}grevergne and Yann Chevaleyre},
  journal={arXiv preprint arXiv:2506.10888},
  year={2025}
}