ResearchTrend.AI

arXiv:2105.12508

Adversarial Robustness against Multiple and Single l_p-Threat Models via Quick Fine-Tuning of Robust Classifiers

26 May 2021
Francesco Croce
Matthias Hein
Abstract

A major drawback of adversarially robust models, in particular for large-scale datasets like ImageNet, is their extremely long training time compared to standard ones. Moreover, models should be robust not only to a single l_p-threat model but ideally to all of them. In this paper we propose Extreme norm Adversarial Training (E-AT) for multiple-norm robustness, which is based on geometric properties of l_p-balls. E-AT costs up to three times less than other adversarial training methods for multiple-norm robustness. Using E-AT, we show that a single epoch on ImageNet and three epochs on CIFAR-10 are sufficient to turn any l_p-robust model into a multiple-norm robust model. In this way we obtain the first multiple-norm robust model for ImageNet and boost the state of the art for multiple-norm robustness to more than 51% on CIFAR-10. Finally, we study the general transfer of adversarial robustness via fine-tuning between different individual l_p-threat models and improve the previous SOTA l_1-robustness on both CIFAR-10 and ImageNet. Extensive experiments show that our scheme works across datasets and architectures, including vision transformers.
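The abstract leaves the mechanics of E-AT implicit. Below is a toy sketch of the core idea only, under the assumption that each training batch samples one of the two extreme threat models (l_1 or l_inf) and crafts its perturbation for that norm alone. The single-step perturbation directions shown are the standard steepest-ascent directions under each norm constraint, not the paper's full PGD-style attacks, and all function names here are hypothetical:

```python
import numpy as np

def linf_step(grad, eps):
    # Steepest ascent under an l_inf budget: move every coordinate
    # by eps in the direction of the gradient's sign.
    return eps * np.sign(grad)

def l1_step(grad, eps):
    # Steepest ascent under an l_1 budget: spend the whole budget on
    # the single coordinate with the largest absolute gradient.
    delta = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    delta[i] = eps * np.sign(grad[i])
    return delta

def eat_perturbation(grad, eps_l1, eps_linf, rng):
    # E-AT sketch: per batch, randomly pick one of the two *extreme*
    # threat models (p = 1 or p = inf) and attack only under that norm;
    # intermediate l_p-robustness is meant to follow from the geometry
    # of l_p-balls rather than from training against every p.
    if rng.random() < 0.5:
        return l1_step(grad, eps_l1), 1
    return linf_step(grad, eps_linf), np.inf

# Toy usage on a random "loss gradient" for one input.
rng = np.random.default_rng(0)
grad = rng.standard_normal(10)
delta, p = eat_perturbation(grad, eps_l1=12.0, eps_linf=0.03, rng=rng)
```

Either way the sampled perturbation lies inside the corresponding extreme-norm ball, so each batch performs ordinary single-norm adversarial training; only the per-batch sampling over {1, inf} is specific to the multiple-norm scheme.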
