Finite mixtures of classifiers (a.k.a. randomized ensembles) have been proposed as a way to improve robustness against adversarial attacks. However, existing attacks have been shown to be ill-suited to this kind of classifier. In this paper, we discuss the problem of attacking a mixture in a principled way and introduce two desirable properties of attacks, effectiveness and maximality, derived from a geometrical analysis of the problem. We then show that existing attacks fail to satisfy both properties simultaneously. Finally, we introduce a new attack, the {\em lattice climber attack}, which comes with theoretical guarantees in the binary linear setting, and demonstrate its performance through experiments on synthetic and real datasets.
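As an illustration of the problem the abstract refers to, here is a minimal sketch of the standard expected-loss formulation for attacking a randomized mixture; the notation ($q$, $h_i$, $\ell$, $\epsilon$) is ours and is not taken from the paper itself.

% A randomized mixture places weights q_1, ..., q_m on base classifiers h_1, ..., h_m.
% An attack seeks a perturbation \delta within an \epsilon-ball that maximizes the
% expected loss of the mixture at a labeled point (x, y):
\[
  \max_{\|\delta\| \le \epsilon} \; \mathbb{E}_{h \sim q}\big[\ell(h(x+\delta),\, y)\big]
  \;=\;
  \max_{\|\delta\| \le \epsilon} \; \sum_{i=1}^{m} q_i \,\ell\big(h_i(x+\delta),\, y\big).
\]

Under this view, an attack that only fools some of the base classifiers may leave the expected loss suboptimal, which motivates studying what properties a principled attack on a mixture should satisfy.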
@article{gnecco-heredia2025_2506.10888,
  title   = {Lattice Climber Attack: Adversarial attacks for randomized mixtures of classifiers},
  author  = {Lucas Gnecco-Heredia and Benjamin Negrevergne and Yann Chevaleyre},
  journal = {arXiv preprint arXiv:2506.10888},
  year    = {2025}
}