Enhancing Adversarial Robustness with Conformal Prediction: A Framework for Guaranteed Model Reliability

9 June 2025
Jie Bao
Chuangyin Dang
Rui Luo
Hanwei Zhang
Zhixin Zhou
    AAML
ArXiv (abs) · PDF · HTML
Main: 9 pages · 4 figures · 10 tables · Bibliography: 3 pages · Appendix: 8 pages
Abstract

As deep learning models are increasingly deployed in high-risk applications, robust defenses against adversarial attacks and reliable performance guarantees become paramount. Moreover, accuracy alone does not provide sufficient assurance or reliable uncertainty estimates for these models. This study advances adversarial training by leveraging principles from Conformal Prediction. Specifically, we develop an adversarial attack method, termed OPSA (OPtimal Size Attack), designed to reduce the efficiency of conformal prediction at any significance level by maximizing model uncertainty without requiring coverage guarantees. Correspondingly, we introduce OPSA-AT (Adversarial Training), a defense strategy that integrates OPSA within a novel conformal training paradigm. Experimental evaluations demonstrate that our OPSA attack method induces greater uncertainty compared to baseline approaches for various defenses. Conversely, our OPSA-AT defensive model significantly enhances robustness not only against OPSA but also against other adversarial attacks, while maintaining reliable predictions. Our findings highlight the effectiveness of this integrated approach for developing trustworthy and resilient deep learning models for safety-critical domains. Our code is available at this https URL.
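
The abstract measures conformal prediction "efficiency" through the size of the prediction sets a calibrated model emits; OPSA is described as an attack that enlarges this quantity, and OPSA-AT as training against it. As a hedged illustration only, the sketch below (plain NumPy, using a standard split-conformal procedure with a 1 - softmax score) shows how such set sizes are computed. The actual scores, attack objective, and training loss of OPSA and OPSA-AT are defined in the paper, not here.

# Illustrative sketch only (not the authors' implementation): split conformal
# prediction with score s(x, y) = 1 - p_y(x), and the average prediction-set
# size ("inefficiency") that an OPSA-style attack would aim to enlarge.
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    # Score of each calibration example: one minus the softmax probability
    # assigned to its true label.
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile giving (1 - alpha) marginal coverage.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, method="higher")

def prediction_sets(test_probs, threshold):
    # A class enters the set whenever its score 1 - p_y(x) is within the threshold.
    return (1.0 - test_probs) <= threshold

def average_set_size(test_probs, threshold):
    # Efficiency metric: smaller average sets mean more informative predictions;
    # an attack that inflates this quantity increases the reported uncertainty.
    return prediction_sets(test_probs, threshold).sum(axis=1).mean()

# Toy usage with random softmax outputs (placeholders, not real data).
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet(np.ones(10), size=500)
cal_labels = rng.integers(0, 10, size=500)
test_probs = rng.dirichlet(np.ones(10), size=200)
tau = conformal_threshold(cal_probs, cal_labels, alpha=0.1)
print("average prediction-set size:", average_set_size(test_probs, tau))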

@article{bao2025_2506.07804,
  title={Enhancing Adversarial Robustness with Conformal Prediction: A Framework for Guaranteed Model Reliability},
  author={Jie Bao and Chuangyin Dang and Rui Luo and Hanwei Zhang and Zhixin Zhou},
  journal={arXiv preprint arXiv:2506.07804},
  year={2025}
}