ResearchTrend.AI

Degradation-Aware Feature Perturbation for All-in-One Image Restoration

19 May 2025
Xiangpeng Tian
Xiangyu Liao
Xiao Liu
Meng Li
Chao Ren
Abstract

All-in-one image restoration aims to recover clear images from various degradation types and levels with a unified model. Nonetheless, the significant variations among degradation types present challenges for training a universal model, often resulting in task interference, where the gradient update directions of different tasks may diverge due to shared parameters. To address this issue, motivated by the routing strategy, we propose DFPIR, a novel all-in-one image restorer that introduces Degradation-aware Feature Perturbations (DFP) to adjust the feature space to align with the unified parameter space. In this paper, the feature perturbations primarily include channel-wise perturbations and attention-wise perturbations. Specifically, channel-wise perturbations are implemented by shuffling the channels in high-dimensional space guided by degradation types, while attention-wise perturbations are achieved through selective masking in the attention space. To achieve these goals, we propose a Degradation-Guided Perturbation Block (DGPB) to implement these two functions, positioned between the encoding and decoding stages of the encoder-decoder architecture. Extensive experimental results demonstrate that DFPIR achieves state-of-the-art performance on several all-in-one image restoration tasks, including image denoising, image dehazing, image deraining, motion deblurring, and low-light image enhancement. Our codes are available at this https URL.
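The two perturbations described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function names, shapes, and the top-k masking rule are assumptions. Channel-wise perturbation applies a permutation that depends only on the degradation type, so features of the same degradation are always routed the same way; attention-wise perturbation selectively masks weak entries of an attention score matrix.

```python
import math
import random

def channel_perturbation(feat, degradation_id, seed=0):
    """Shuffle feature channels with a permutation fixed per degradation type.

    `feat` is a list of per-channel feature maps. The permutation is seeded
    by the degradation id, so the same degradation type always induces the
    same channel routing. Names and shapes are illustrative only.
    """
    perm = list(range(len(feat)))
    random.Random(seed + degradation_id).shuffle(perm)
    return [feat[i] for i in perm]

def attention_mask_perturbation(scores, keep=2):
    """Selective masking in the attention space (a hypothetical rule):
    keep the top-`keep` logits per row and drop the rest, then apply a
    row-wise softmax over the surviving entries."""
    out = []
    for row in scores:
        kept = set(sorted(range(len(row)), key=lambda i: row[i])[-keep:])
        exps = [math.exp(v) if i in kept else 0.0 for i, v in enumerate(row)]
        total = sum(exps)
        out.append([e / total for e in exps])
    return out
```

In the paper these operations sit inside the DGPB between the encoder and decoder; here they are shown on plain Python lists purely to make the routing idea concrete.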

@article{tian2025_2505.12630,
  title={Degradation-Aware Feature Perturbation for All-in-One Image Restoration},
  author={Xiangpeng Tian and Xiangyu Liao and Xiao Liu and Meng Li and Chao Ren},
  journal={arXiv preprint arXiv:2505.12630},
  year={2025}
}