From Zero to Detail: Deconstructing Ultra-High-Definition Image Restoration from Progressive Spectral Perspective

17 March 2025
Chen Zhao
Zhizhou Chen
Yunzhe Xu
Enxuan Gu
Jian Li
Zili Yi
Qian Wang
Jian Yang
Ying Tai
Abstract

Ultra-high-definition (UHD) image restoration faces significant challenges due to its high resolution, complex content, and intricate details. To address these challenges, we analyze the restoration process in depth from a progressive spectral perspective, and deconstruct the complex UHD restoration problem into three progressive stages: zero-frequency enhancement, low-frequency restoration, and high-frequency refinement. Building on this insight, we propose a novel framework, ERR, which comprises three collaborative sub-networks: the zero-frequency enhancer (ZFE), the low-frequency restorer (LFR), and the high-frequency refiner (HFR). Specifically, the ZFE integrates global priors to learn a global mapping, while the LFR restores low-frequency information, emphasizing the reconstruction of coarse-grained content. Finally, the HFR employs our designed frequency-windowed Kolmogorov-Arnold networks (FW-KAN) to refine textures and details, producing high-quality restored images. Our approach significantly outperforms previous UHD methods across various tasks, with extensive ablation studies validating the effectiveness of each component. The code is available at this https URL.
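The progressive spectral split the abstract describes (zero-, low-, and high-frequency components) can be illustrated with a plain 2D FFT decomposition. This is only a minimal sketch of the underlying frequency view, not the paper's ERR architecture; the function name and the cutoff value are hypothetical choices for illustration.

```python
import numpy as np

def spectral_decompose(img, low_cutoff=0.1):
    """Split a single-channel image into zero-, low-, and high-frequency
    bands via the 2D FFT. The cutoff (as a fraction of the normalized
    frequency radius) is an illustrative choice, not the paper's."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    cy, cx = h // 2, w // 2  # DC bin location after fftshift
    yy, xx = np.ogrid[:h, :w]
    radius = np.sqrt(((yy - cy) / h) ** 2 + ((xx - cx) / w) ** 2)

    zero_mask = radius == 0                       # DC: global brightness
    low_mask = (radius <= low_cutoff) & ~zero_mask  # coarse content
    high_mask = ~(zero_mask | low_mask)             # textures and details

    def band(mask):
        # Keep only the masked frequencies and transform back.
        return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

    return band(zero_mask), band(low_mask), band(high_mask)

img = np.random.rand(64, 64)
z, l, h = spectral_decompose(img)
# The three masks partition the spectrum, so the bands sum to the input.
assert np.allclose(z + l + h, img)
```

Because the three masks partition the frequency plane, the bands always reconstruct the input exactly; the zero-frequency band is a constant equal to the image mean, which is why the paper can treat it as a separate global-enhancement stage.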

@article{zhao2025_2503.13165,
  title={From Zero to Detail: Deconstructing Ultra-High-Definition Image Restoration from Progressive Spectral Perspective},
  author={Chen Zhao and Zhizhou Chen and Yunzhe Xu and Enxuan Gu and Jian Li and Zili Yi and Qian Wang and Jian Yang and Ying Tai},
  journal={arXiv preprint arXiv:2503.13165},
  year={2025}
}