Adversarially Robust Learning with Optimal Transport Regularized Divergences

Abstract

We introduce a new class of optimal-transport-regularized divergences, D^c, constructed via an infimal convolution between an information divergence, D, and an optimal-transport (OT) cost, C, and study their use in distributionally robust optimization (DRO). In particular, we propose the ARMOR_D methods as novel approaches to enhancing the adversarial robustness of deep learning models. These DRO-based methods are defined by minimizing the maximum expected loss over a D^c-neighborhood of the empirical distribution of the training data. Viewed as a tool for constructing adversarial samples, our method allows samples to be both transported, according to the OT cost, and re-weighted, according to the information divergence; the addition of a principled and dynamical adversarial re-weighting on top of adversarial sample transport is a key innovation of ARMOR_D. ARMOR_D can be viewed as a generalization of the best-performing loss functions and OT costs in the adversarial training literature; we demonstrate this flexibility by using ARMOR_D to augment the UDR, TRADES, and MART methods, obtaining improved performance on CIFAR-10 and CIFAR-100 image recognition. Specifically, augmenting with ARMOR_D yields 1.9% and 2.1% improvements against AutoAttack, a powerful ensemble of adversarial attacks, on CIFAR-10 and CIFAR-100, respectively. To foster reproducibility, we have made the code accessible at this https URL.
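To make the construction above concrete, one minimal reading of the abstract (the notation P_n for the empirical distribution, epsilon for the neighborhood radius, and \ell_\theta for the per-sample loss is ours, and the precise argument ordering of the infimal convolution is as defined in the paper) is:

    D^c(Q, P) = \inf_{\eta} \big\{ D(Q \,\|\, \eta) + C(\eta, P) \big\},
    \qquad
    \min_{\theta} \; \sup_{Q :\, D^c(Q, P_n) \le \epsilon} \; \mathbb{E}_{x \sim Q}\big[ \ell_\theta(x) \big].

In the same spirit, the sketch below illustrates in PyTorch how a single inner-maximization step could combine sample transport with sample re-weighting. It is a hypothetical illustration, not the authors' implementation: the squared-L2 transport penalty, the KL-style softmax re-weighting, and all names and hyperparameters (armor_like_inner_step, eps_step, ot_weight, kl_temp) are assumptions made for this example.

    # Hypothetical sketch (not the authors' code): one inner-maximization step that
    # combines sample transport (PGD-style perturbation penalized by a squared-L2
    # transport cost) with KL-style re-weighting (softmax of per-sample losses).
    import torch
    import torch.nn.functional as F

    def armor_like_inner_step(model, x, y, eps_step=0.01, n_steps=10,
                              ot_weight=1.0, kl_temp=1.0):
        """Return perturbed inputs and per-sample weights for a robust loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        for _ in range(n_steps):
            losses = F.cross_entropy(model(x_adv), y, reduction="none")
            # transport penalty: squared-L2 cost between x_adv and the clean x
            penalty = ot_weight * ((x_adv - x) ** 2).flatten(1).sum(dim=1)
            obj = (losses - penalty).sum()
            grad, = torch.autograd.grad(obj, x_adv)
            x_adv = (x_adv + eps_step * grad.sign()).detach().requires_grad_(True)
        with torch.no_grad():
            final_losses = F.cross_entropy(model(x_adv), y, reduction="none")
            # re-weighting: exponential tilt of the per-sample losses (KL-style)
            weights = torch.softmax(final_losses / kl_temp, dim=0)
        return x_adv.detach(), weights

    # Outer minimization would then use a weighted loss, e.g.:
    #   x_adv, w = armor_like_inner_step(model, x, y)
    #   loss = (w * F.cross_entropy(model(x_adv), y, reduction="none")).sum()

The re-weighting step is what distinguishes such a scheme from plain adversarial training: harder examples receive larger weight in the outer minimization, mirroring the dynamical adversarial re-weighting described above.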

@article{birrell2025_2309.03791,
  title={Adversarially Robust Learning with Optimal Transport Regularized Divergences},
  author={Jeremiah Birrell and Reza Ebrahimi},
  journal={arXiv preprint arXiv:2309.03791},
  year={2025}
}