
Underwater Diffusion Attention Network with Contrastive Language-Image Joint Learning for Underwater Image Enhancement

26 pages (main) + 13 pages (bibliography), 14 figures, 2 tables
Abstract

Underwater images are often affected by complex degradations such as light absorption, scattering, color casts, and artifacts, making enhancement critical for effective object detection, recognition, and scene understanding in aquatic environments. Existing methods, especially diffusion-based approaches, typically rely on synthetic paired datasets due to the scarcity of real underwater references, which introduces bias and limits generalization. Furthermore, fine-tuning these models can degrade learned priors, resulting in unrealistic enhancements due to domain shifts. To address these challenges, we propose UDAN-CLIP, an image-to-image diffusion framework pre-trained on synthetic underwater datasets and enhanced with a customized classifier based on a vision-language model, a spatial attention module, and a novel CLIP-Diffusion loss. The classifier preserves natural in-air priors and semantically guides the diffusion process, while the spatial attention module focuses on correcting localized degradations such as haze and low contrast. The CLIP-Diffusion loss further strengthens visual-textual alignment and helps maintain semantic consistency during enhancement. Together, these contributions enable UDAN-CLIP to perform more effective underwater image enhancement, producing results that are not only visually compelling but also more realistic and detail-preserving. These improvements are consistently validated through both quantitative metrics and qualitative visual comparisons, demonstrating the model's ability to correct distortions and restore natural appearance in challenging underwater conditions.
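For illustration only, the snippet below is a minimal PyTorch-style sketch of how a CLIP-Diffusion objective of the kind described above could be combined with a standard denoising loss: a noise-prediction term plus a cosine-alignment term between the enhanced image and a text embedding describing a clean in-air scene. The placeholder encoder, the weighting factor `lambda_clip`, and the exact form of the alignment term are assumptions for illustration, not the paper's published formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DummyImageEncoder(nn.Module):
    """Hypothetical stand-in for a CLIP image encoder; in practice a
    pretrained vision-language encoder would be used here (the abstract
    does not specify the exact variant)."""
    def __init__(self, embed_dim: int = 512):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.proj = nn.Linear(3, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, 3, H, W)
        return self.proj(self.pool(x).flatten(1))        # (B, embed_dim)

def clip_diffusion_loss(denoiser_pred, noise_target,
                        enhanced_img, text_embed, img_encoder,
                        lambda_clip: float = 0.1):
    """Combined objective sketch: standard diffusion noise-prediction loss
    plus a CLIP-style image-text alignment term. Weighting and formulation
    are assumptions, not the authors' exact loss."""
    diff_loss = F.mse_loss(denoiser_pred, noise_target)
    img_embed = F.normalize(img_encoder(enhanced_img), dim=-1)
    txt_embed = F.normalize(text_embed, dim=-1)
    clip_loss = 1.0 - (img_embed * txt_embed).sum(dim=-1).mean()
    return diff_loss + lambda_clip * clip_loss

# Example usage with random tensors (shapes only; not real model outputs):
# pred, target = torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64)
# enhanced, text = torch.rand(2, 3, 64, 64), torch.randn(2, 512)
# loss = clip_diffusion_loss(pred, target, enhanced, text, DummyImageEncoder())
```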

@article{shaahid2025_2505.19895,
  title={Underwater Diffusion Attention Network with Contrastive Language-Image Joint Learning for Underwater Image Enhancement},
  author={Afrah Shaahid and Muzammil Behzad},
  journal={arXiv preprint arXiv:2505.19895},
  year={2025}
}