D2R: dual regularization loss with collaborative adversarial generation for model robustness

8 June 2025
Zhenyu Liu
Huizhi Liang
Rajiv Ranjan
Zhanxing Zhu
Václav Snášel
Varun Ojha
Main: 10 pages, 5 figures, 2 tables; bibliography: 2 pages
Abstract

The robustness of deep neural network models is crucial for defending against adversarial attacks. Recent defense methods have employed collaborative learning frameworks to enhance model robustness. Two key limitations of existing methods are (i) insufficient guidance of the target model via loss functions and (ii) non-collaborative adversarial generation. We therefore propose a dual regularization loss (D2R loss) method and a collaborative adversarial generation (CAG) strategy for adversarial training. The D2R loss comprises two optimization steps, an adversarial-distribution optimization and a clean-distribution optimization, which enhance the target model's robustness by leveraging the strengths of different loss functions, obtained via a suitable function-space exploration, to focus more precisely on the target model's distribution. CAG generates adversarial samples through a gradient-based collaboration between a guidance model and the target model. We conducted extensive experiments on three benchmark datasets, CIFAR-10, CIFAR-100, and Tiny ImageNet, and two popular target models, WideResNet34-10 and PreActResNet18. Our results show that the D2R loss with CAG produces highly robust models.
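
To make the two ideas in the abstract concrete, the following is a minimal PyTorch sketch of one adversarial-training step that pairs a guidance model with the target model. The specific loss terms (KL divergence for the adversarial-distribution step, cross-entropy for the clean-distribution step), the equal weighting, and the way the guidance model enters the attack are illustrative assumptions, not the paper's exact D2R/CAG formulation.

# Sketch of CAG-style attack generation and a D2R-style dual loss.
# The loss choices and weights below are assumptions for illustration only.
import torch
import torch.nn.functional as F

def cag_attack(target_model, guidance_model, x, y,
               eps=8 / 255, alpha=2 / 255, steps=10):
    # Collaborative adversarial generation (sketch): the perturbation is driven
    # by gradients of a loss involving both the guidance and the target model.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Assumed collaboration: sum of the two models' cross-entropy losses.
        loss = (F.cross_entropy(target_model(x_adv), y)
                + F.cross_entropy(guidance_model(x_adv), y))
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def d2r_loss(target_model, guidance_model, x, x_adv, y,
             lam_adv=1.0, lam_clean=1.0):
    # Dual regularization (sketch): one term aligns the target model's output
    # distribution on adversarial inputs with the guidance model, the other
    # fits the clean-data distribution.
    with torch.no_grad():
        p_guid = F.softmax(guidance_model(x), dim=1)
    logp_adv = F.log_softmax(target_model(x_adv), dim=1)
    adv_term = F.kl_div(logp_adv, p_guid, reduction="batchmean")  # adversarial-distribution step
    clean_term = F.cross_entropy(target_model(x), y)              # clean-distribution step
    return lam_adv * adv_term + lam_clean * clean_term

A training loop would call cag_attack to craft x_adv for each batch and then backpropagate d2r_loss through the target model only, keeping the guidance model fixed or updated by its own schedule.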

View on arXiv
@article{liu2025_2506.07056,
  title={D2R: dual regularization loss with collaborative adversarial generation for model robustness},
  author={Zhenyu Liu and Huizhi Liang and Rajiv Ranjan and Zhanxing Zhu and Vaclav Snasel and Varun Ojha},
  journal={arXiv preprint arXiv:2506.07056},
  year={2025}
}