MAGREF: Masked Guidance for Any-Reference Video Generation

Video generation has made substantial strides with the emergence of deep generative models, especially diffusion-based approaches. However, video generation conditioned on multiple reference subjects still faces significant challenges in maintaining multi-subject consistency and ensuring high generation quality. In this paper, we propose MAGREF, a unified framework for any-reference video generation that introduces masked guidance to enable coherent multi-subject video synthesis conditioned on diverse reference images and a textual prompt. Specifically, we propose (1) a region-aware dynamic masking mechanism that enables a single model to flexibly handle diverse subject references, including humans, objects, and backgrounds, without architectural changes, and (2) a pixel-wise channel concatenation mechanism that fuses reference features with the video latent along the channel dimension to better preserve appearance details. Our model delivers state-of-the-art video generation quality, generalizing from single-subject training to complex multi-subject scenarios with coherent synthesis and precise control over individual subjects, and it outperforms existing open-source and commercial baselines. To facilitate evaluation, we also introduce a comprehensive multi-subject video benchmark. Extensive experiments demonstrate the effectiveness of our approach, paving the way for scalable, controllable, and high-fidelity multi-subject video synthesis. Code and model can be found at: this https URL
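To make the two mechanisms concrete, below is a minimal PyTorch sketch of masked guidance with channel concatenation: reference latents are pasted into region boxes on a canvas, a binary mask records where references occupy, and both are concatenated with the noisy video latent along the channel dimension. All function names, shapes, and the box-based region layout are illustrative assumptions for exposition, not the released MAGREF implementation.

```python
# Minimal sketch of masked guidance via pixel-wise channel concatenation.
# Shapes and module names are illustrative assumptions, not the authors' code.
import torch
import torch.nn.functional as F


def build_masked_reference(ref_latents, regions, latent_hw):
    """Composite reference latents into one canvas plus a binary mask.

    ref_latents: list of (C, h_i, w_i) latents, one per reference subject
    regions:     list of (top, left, height, width) boxes in latent space
    latent_hw:   (H, W) spatial size of the video latent
    """
    C = ref_latents[0].shape[0]
    H, W = latent_hw
    canvas = torch.zeros(C, H, W)   # holds pasted reference features
    mask = torch.zeros(1, H, W)     # 1 where a reference occupies a region
    for lat, (t, l, h, w) in zip(ref_latents, regions):
        resized = F.interpolate(lat[None], size=(h, w),
                                mode="bilinear", align_corners=False)[0]
        canvas[:, t:t + h, l:l + w] = resized
        mask[:, t:t + h, l:l + w] = 1.0
    return canvas, mask


def concat_condition(noisy_video, canvas, mask):
    """Concatenate noise, reference canvas, and mask along channels.

    noisy_video: (B, C, T, H, W) noisy video latent at the current step
    Returns a (B, 2C+1, T, H, W) tensor for the denoiser's first conv.
    """
    B, C, T, H, W = noisy_video.shape
    cond = torch.cat([canvas, mask], dim=0)            # (C+1, H, W)
    cond = cond[None, :, None].expand(B, C + 1, T, H, W)
    return torch.cat([noisy_video, cond], dim=1)       # (B, 2C+1, T, H, W)


# Toy usage: two reference subjects in a 16-channel, 32x32 latent space.
refs = [torch.randn(16, 8, 8), torch.randn(16, 8, 8)]
boxes = [(0, 0, 16, 16), (16, 16, 16, 16)]
canvas, mask = build_masked_reference(refs, boxes, (32, 32))
noisy = torch.randn(2, 16, 4, 32, 32)                  # (B, C, T, H, W)
out = concat_condition(noisy, canvas, mask)
print(out.shape)                                       # torch.Size([2, 33, 4, 32, 32])
```

In a sketch like this, the denoiser's input convolution would be widened to accept the extra reference and mask channels; adding or removing subjects at inference changes only the mask and pasted regions, not the architecture, which is what lets a single model handle any-reference conditioning.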
@article{deng2025_2505.23742,
  title   = {MAGREF: Masked Guidance for Any-Reference Video Generation},
  author  = {Yufan Deng and Xun Guo and Yuanyang Yin and Jacob Zhiyuan Fang and Yiding Yang and Yizhi Wang and Shenghai Yuan and Angtian Wang and Bo Liu and Haibin Huang and Chongyang Ma},
  journal = {arXiv preprint arXiv:2505.23742},
  year    = {2025}
}