UniRelight: Learning Joint Decomposition and Synthesis for Video Relighting

18 June 2025
Kai He
Ruofan Liang
Jacob Munkberg
Jon Hasselgren
Nandita Vijaykumar
Alexander Keller
Sanja Fidler
Igor Gilitschenski
Zan Gojcic
Zian Wang
Main: 9 pages · Bibliography: 4 pages · Appendix: 5 pages · 12 figures · 6 tables
Abstract

We address the challenge of relighting a single image or video, a task that demands a precise understanding of scene intrinsics and high-quality synthesis of light transport. Existing end-to-end relighting models are often limited by the scarcity of paired multi-illumination data, which restricts their ability to generalize across diverse scenes. Conversely, two-stage pipelines that combine inverse and forward rendering can mitigate the data requirements, but are susceptible to error accumulation and often fail to produce realistic outputs under complex lighting conditions or with sophisticated materials. In this work, we introduce a general-purpose approach that jointly estimates albedo and synthesizes relit outputs in a single pass, harnessing the generative capabilities of video diffusion models. This joint formulation enhances implicit scene comprehension and facilitates the creation of realistic lighting effects and intricate material interactions, such as shadows, reflections, and transparency. Trained on synthetic multi-illumination data and extensive automatically labeled real-world videos, our model demonstrates strong generalization across diverse domains and surpasses previous methods in both visual fidelity and temporal consistency.
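To make the joint formulation concrete, below is a minimal sketch of one way a single denoiser could predict noise for a relit-video latent and an albedo latent in the same forward pass, conditioned on the input video. The module name, channel layout, and the omission of timestep and lighting embeddings are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class JointRelightDenoiser(nn.Module):
    """Sketch: denoise relit-frame and albedo latents in one shared pass."""

    def __init__(self, latent_ch: int = 4, cond_ch: int = 4, hidden: int = 64):
        super().__init__()
        # A single shared backbone sees both noisy targets plus the clean
        # input-video conditioning, so decomposition and relighting share
        # features. (Timestep and lighting embeddings omitted for brevity.)
        self.backbone = nn.Conv3d(2 * latent_ch + cond_ch, hidden,
                                  kernel_size=3, padding=1)
        self.head = nn.Conv3d(hidden, 2 * latent_ch,
                              kernel_size=3, padding=1)

    def forward(self, noisy_relit, noisy_albedo, input_video):
        # Channel-concatenate: (B, 2*latent_ch + cond_ch, T, H, W).
        x = torch.cat([noisy_relit, noisy_albedo, input_video], dim=1)
        h = torch.relu(self.backbone(x))
        eps = self.head(h)
        # Split the prediction back into the two target streams.
        return eps.chunk(2, dim=1)

# Toy shapes: batch 1, 4 latent channels, 8 frames, 32x32 latent grid.
model = JointRelightDenoiser()
z = lambda: torch.randn(1, 4, 8, 32, 32)
eps_relit, eps_albedo = model(z(), z(), z())
print(eps_relit.shape, eps_albedo.shape)  # each torch.Size([1, 4, 8, 32, 32])
```

The point the sketch isolates is that both outputs come from one set of shared features, which is what would let supervision on albedo decomposition inform relighting, and vice versa, rather than chaining two separately trained stages.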

@article{he2025_2506.15673,
  title={UniRelight: Learning Joint Decomposition and Synthesis for Video Relighting},
  author={Kai He and Ruofan Liang and Jacob Munkberg and Jon Hasselgren and Nandita Vijaykumar and Alexander Keller and Sanja Fidler and Igor Gilitschenski and Zan Gojcic and Zian Wang},
  journal={arXiv preprint arXiv:2506.15673},
  year={2025}
}