Towards Effective and Efficient Adversarial Defense with Diffusion Models for Robust Visual Tracking

31 May 2025
Long Xu, Peng Gao, Wen-Jia Tang, Fei Wang, Ru-Yue Yuan
Main text: 19 pages, 9 figures, 5 tables; bibliography: 3 pages
Abstract

Although deep learning-based visual tracking methods have made significant progress, they remain vulnerable to carefully designed adversarial attacks, which can cause a sharp decline in tracking performance. To address this issue, this paper proposes, for the first time, an adversarial defense method based on denoising diffusion probabilistic models, termed DiffDf, aimed at effectively improving the robustness of existing visual tracking methods against adversarial attacks. DiffDf establishes a multi-scale defense mechanism by combining a pixel-level reconstruction loss, a semantic consistency loss, and a structural similarity loss, effectively suppressing adversarial perturbations through a gradual denoising process. Extensive experiments on several mainstream datasets show that DiffDf generalizes well across trackers with different architectures, significantly improving various evaluation metrics while achieving real-time inference speeds of over 30 FPS, demonstrating strong defense performance and efficiency. Code is available at this https URL.
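The abstract names the three terms of the multi-scale defense objective but not their exact formulation or weights. A minimal PyTorch sketch of such a combined loss, assuming hypothetical weights (w_pix, w_sem, w_ssim), a frozen feature extractor for the semantic term, and a simplified single-window SSIM (all illustrative choices, not taken from the paper):

import torch
import torch.nn.functional as F

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # Simplified global (single-window) SSIM per image, averaged over the batch.
    mu_x, mu_y = x.mean(dim=(1, 2, 3)), y.mean(dim=(1, 2, 3))
    var_x, var_y = x.var(dim=(1, 2, 3)), y.var(dim=(1, 2, 3))
    cov = ((x - mu_x[:, None, None, None]) * (y - mu_y[:, None, None, None])).mean(dim=(1, 2, 3))
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
    return ssim.mean()

def defense_loss(denoised, clean, feat_net, w_pix=1.0, w_sem=0.1, w_ssim=0.5):
    # Hypothetical multi-scale objective combining the three terms from the
    # abstract; the weights and feature extractor are assumptions for illustration.
    # Pixel-level reconstruction loss.
    l_pix = F.mse_loss(denoised, clean)
    # Semantic consistency loss via a frozen feature extractor (assumption).
    with torch.no_grad():
        f_clean = feat_net(clean)
    f_den = feat_net(denoised)
    l_sem = 1.0 - F.cosine_similarity(f_den.flatten(1), f_clean.flatten(1), dim=1).mean()
    # Structural similarity loss (1 - SSIM).
    l_ssim = 1.0 - global_ssim(denoised, clean)
    return w_pix * l_pix + w_sem * l_sem + w_ssim * l_ssim

In training, this loss would supervise the output of the gradual denoising process against clean reference frames; the actual DiffDf formulation may differ in weighting and in how each term is computed.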

@article{xu2025_2506.00325,
  title={Towards Effective and Efficient Adversarial Defense with Diffusion Models for Robust Visual Tracking},
  author={Long Xu and Peng Gao and Wen-Jia Tang and Fei Wang and Ru-Yue Yuan},
  journal={arXiv preprint arXiv:2506.00325},
  year={2025}
}