
arXiv:2302.10164

Seasoning Model Soups for Robustness to Adversarial and Natural Distribution Shifts

20 February 2023
Francesco Croce
Sylvestre-Alvise Rebuffi
Evan Shelhamer
Sven Gowal
    AAML
Abstract

Adversarial training is widely used to make classifiers robust to a specific threat or adversary, such as $\ell_p$-norm bounded perturbations of a given $p$-norm. However, existing methods for training classifiers robust to multiple threats require knowledge of all attacks during training and remain vulnerable to unseen distribution shifts. In this work, we describe how to obtain adversarially-robust model soups (i.e., linear combinations of parameters) that smoothly trade off robustness to different $\ell_p$-norm bounded adversaries. We demonstrate that such soups allow us to control the type and level of robustness, and can achieve robustness to all threats without jointly training on all of them. In some cases, the resulting model soups are more robust to a given $\ell_p$-norm adversary than the constituent model specialized against that same adversary. Finally, we show that adversarially-robust model soups can be a viable tool to adapt to distribution shifts from a few examples.
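
As a rough illustration of the "model soup" idea described above, the sketch below forms a convex combination of the parameters of two checkpoints, e.g., one trained against an $\ell_\infty$ adversary and one against an $\ell_2$ adversary. The checkpoint paths, the ResNet-18 architecture, and the mixing weight alpha are illustrative assumptions, not the authors' released code; varying alpha is what would trade off robustness between the two threat models.

```python
# Minimal sketch of a two-model "soup": a convex combination of parameters.
# Checkpoint paths, the architecture, and `alpha` are illustrative
# assumptions, not the paper's released code.
import torch
import torchvision

def soup_state_dicts(sd_a, sd_b, alpha):
    """Interpolate two compatible state dicts: alpha * a + (1 - alpha) * b."""
    return {
        k: alpha * sd_a[k] + (1.0 - alpha) * sd_b[k]
        if sd_a[k].is_floating_point() else sd_a[k]  # leave integer buffers as-is
        for k in sd_a
    }

# Two hypothetical checkpoints with the same architecture: one trained
# against an l_inf-bounded adversary, one against an l_2-bounded adversary.
sd_linf = torch.load("resnet18_linf.pt", map_location="cpu")
sd_l2 = torch.load("resnet18_l2.pt", map_location="cpu")

# alpha in [0, 1] controls the robustness trade-off between the two threats.
model = torchvision.models.resnet18()
model.load_state_dict(soup_state_dicts(sd_linf, sd_l2, alpha=0.5))
model.eval()  # the souped model can now be evaluated under either attack
```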
