ResearchTrend.AI

Low-Light Video Enhancement via Spatial-Temporal Consistent Decomposition

24 May 2024
Xiaogang Xu
Kun Zhou
Tao Hu
Ruixing Wang
Hujun Bao
Hao Peng
Bei Yu
Abstract

Low-Light Video Enhancement (LLVE) seeks to restore dynamic or static scenes degraded by poor visibility and severe noise. In this paper, we present an innovative video decomposition strategy that incorporates view-independent and view-dependent components to enhance the performance of LLVE. We leverage dynamic cross-frame correspondences for the view-independent term (which primarily captures intrinsic appearance) and impose a scene-level continuity constraint on the view-dependent term (which mainly describes the shading condition) to achieve consistent and satisfactory decomposition results. To further ensure consistent decomposition, we introduce a dual-structure enhancement network featuring a cross-frame interaction mechanism. By supervising different frames simultaneously, this network encourages them to exhibit matching decomposition features. This mechanism can seamlessly integrate with encoder-decoder single-frame networks, incurring minimal additional parameter costs. Extensive experiments are conducted on widely recognized LLVE benchmarks, covering diverse scenarios. Our framework consistently outperforms existing methods, establishing new state-of-the-art performance.
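To illustrate the decomposition idea in the abstract, the sketch below splits each frame into a shading-like (view-dependent) term and an appearance-like (view-independent) term, and measures how much the appearance term drifts between consecutive frames. This is only a toy Retinex-style stand-in under our own assumptions (a box-blurred luminance as the shading estimate); the paper's decomposition is learned and its consistency constraints are more sophisticated.

```python
import numpy as np

def decompose(frame, eps=1e-6, k=5):
    """Toy split of an HxWx3 frame into a view-dependent shading map
    (box-blurred luminance) and a view-independent appearance map
    (frame / shading). Illustrative only; not the paper's method."""
    lum = frame.mean(axis=-1, keepdims=True)          # crude luminance
    pad = np.pad(lum, ((k, k), (k, k), (0, 0)), mode="edge")
    shading = np.zeros_like(lum)
    h, w = lum.shape[:2]
    for dy in range(-k, k + 1):                        # (2k+1)^2 box blur
        for dx in range(-k, k + 1):
            shading += pad[k + dy:k + dy + h, k + dx:k + dx + w]
    shading /= (2 * k + 1) ** 2
    appearance = frame / (shading + eps)
    return appearance, shading

def temporal_consistency_loss(frames):
    """Mean squared change of the appearance term across consecutive
    frames: a simple stand-in for the cross-frame consistency
    constraint on the view-independent component."""
    apps = [decompose(f)[0] for f in frames]
    diffs = [np.mean((a - b) ** 2) for a, b in zip(apps, apps[1:])]
    return float(np.mean(diffs))
```

Note that a uniform brightness change largely cancels out of the appearance term (it is absorbed by the shading term), which is the intuition behind constraining the view-independent component across frames.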

@article{xu2025_2405.15660,
  title={Low-Light Video Enhancement via Spatial-Temporal Consistent Decomposition},
  author={Xiaogang Xu and Kun Zhou and Tao Hu and Jiafei Wu and Ruixing Wang and Hao Peng and Bei Yu},
  journal={arXiv preprint arXiv:2405.15660},
  year={2025}
}