Adversarial Robustness in Parameter-Space Classifiers

27 February 2025
Tamir Shor
Ethan Fetaya
Chaim Baskin
Alex Bronstein
AAML, OOD
Abstract

Implicit Neural Representations (INRs) have recently garnered increasing interest across various research fields, mainly due to their ability to represent large, complex data in a compact and continuous manner. Past work further showed that numerous popular downstream tasks can be performed directly in the INR parameter space. Doing so can substantially reduce the computational resources required to process the represented data in their native domain. A major difficulty in using modern machine-learning approaches is their high susceptibility to adversarial attacks, which have been shown to greatly limit the reliability and applicability of such methods in a wide range of settings. In this work, we show that parameter-space models trained for classification are inherently robust to adversarial attacks, without the need for any robust training. To support our claims, we develop a novel suite of adversarial attacks targeting parameter-space classifiers, and further analyze practical considerations of attacking such classifiers.
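The abstract gives no implementation details, but the setup it describes can be sketched: fit an INR to each datum, flatten its weights into a vector, classify that vector with a network, and attack by perturbing the weights directly. Below is a minimal illustrative sketch, assuming a PyTorch MLP classifier and a standard L-infinity PGD attack; the architecture, names, and hyperparameters are hypothetical and not taken from the paper, whose attack suite and threat model differ.

# Illustrative sketch only; not the paper's implementation. It mirrors the
# setup the abstract describes: a classifier operating directly on flattened
# INR weights, attacked with a standard L-inf PGD perturbation applied in
# parameter space. All names and hyperparameters here are hypothetical.
import torch
import torch.nn as nn

class ParameterSpaceClassifier(nn.Module):
    """Hypothetical MLP that classifies a flattened INR weight vector."""
    def __init__(self, n_params: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, 512),
            nn.ReLU(),
            nn.Linear(512, n_classes),
        )

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        return self.net(w)

def pgd_on_parameters(model, w, y, eps=0.05, alpha=0.01, steps=20):
    """Standard L-inf PGD, except the perturbed input is an INR weight vector."""
    delta = torch.zeros_like(w, requires_grad=True)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        loss = loss_fn(model(w + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()   # ascend the classification loss
            delta.clamp_(-eps, eps)        # project back into the eps-ball
    return (w + delta).detach()

# Toy usage with random stand-ins for flattened INR weights.
model = ParameterSpaceClassifier(n_params=1024, n_classes=10)
w = torch.randn(8, 1024)
y = torch.randint(0, 10, (8,))
w_adv = pgd_on_parameters(model, w, y)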

View on arXiv: https://arxiv.org/abs/2502.20314
@article{shor2025_2502.20314,
  title={Adversarial Robustness in Parameter-Space Classifiers},
  author={Tamir Shor and Ethan Fetaya and Chaim Baskin and Alex Bronstein},
  journal={arXiv preprint arXiv:2502.20314},
  year={2025}
}