Towards a Universal Image Degradation Model via Content-Degradation Disentanglement

Image degradation synthesis is highly desirable in a wide variety of applications, ranging from image restoration to simulating artistic effects. Existing models are designed to generate one specific degradation or a narrow set of degradations, and they often require user-provided degradation parameters. As a result, they cannot generalize to synthesize degradations beyond their initial design or adapt to other applications. Here we propose the first universal degradation model, which can synthesize a broad spectrum of complex and realistic degradations containing both homogeneous (global) and inhomogeneous (spatially varying) components. Our model automatically extracts and disentangles homogeneous and inhomogeneous degradation features, which are later used for degradation synthesis without user intervention. We propose a disentangle-by-compression method to separate degradation information from image content, and introduce two novel modules for extracting and incorporating the inhomogeneous components of complex degradations. We demonstrate the model's accuracy and adaptability on film-grain simulation and blind image restoration tasks. The demo video, code, and dataset of this project will be released upon publication at this http URL.
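The abstract distinguishes homogeneous (global) degradations, whose statistics are identical at every pixel, from inhomogeneous (spatially varying) ones. A minimal NumPy sketch of that distinction using additive Gaussian noise; the function names and the noise model are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def apply_homogeneous(img, sigma, seed=0):
    # Homogeneous (global) degradation: the same noise standard
    # deviation sigma is applied at every pixel.
    rng = np.random.default_rng(seed)
    return img + rng.normal(0.0, sigma, img.shape)

def apply_inhomogeneous(img, sigma_map, seed=1):
    # Inhomogeneous (spatially varying) degradation: a per-pixel
    # standard deviation map modulates the noise strength.
    rng = np.random.default_rng(seed)
    return img + rng.normal(0.0, 1.0, img.shape) * sigma_map

# Toy example: a flat gray image with noise that strengthens
# from left (sigma = 0) to right (sigma = 0.2).
img = np.full((4, 4), 0.5)
sigma_map = np.linspace(0.0, 0.2, 4)[None, :].repeat(4, axis=0)
out = apply_inhomogeneous(img, sigma_map)
```

In this sketch the leftmost column is untouched (its sigma is zero) while the right side is visibly noisy, which is exactly the kind of spatial variation a purely global degradation model cannot express.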
@article{yang2025_2505.12860,
  title   = {Towards a Universal Image Degradation Model via Content-Degradation Disentanglement},
  author  = {Wenbo Yang and Zhongling Wang and Zhou Wang},
  journal = {arXiv preprint arXiv:2505.12860},
  year    = {2025}
}