
EchoShot: Multi-Shot Portrait Video Generation

Main: 9 pages, Bibliography: 3 pages, Appendix: 12 pages; 15 figures, 5 tables
Abstract

Video diffusion models substantially boost the productivity of artistic workflows with their capacity for high-quality portrait video generation. However, prevailing pipelines are primarily constrained to single-shot creation, whereas real-world applications call for multiple shots with identity consistency and flexible content controllability. In this work, we propose EchoShot, a native and scalable multi-shot framework for portrait customization built upon a foundation video diffusion model. First, we propose shot-aware position embedding mechanisms within the video diffusion transformer architecture to model inter-shot variations and establish intricate correspondence between multi-shot visual content and the corresponding textual descriptions. This simple yet effective design enables direct training on multi-shot video data without introducing additional computational overhead. To facilitate model training in the multi-shot scenario, we construct PortraitGala, a large-scale, high-fidelity human-centric video dataset featuring cross-shot identity consistency and fine-grained captions covering facial attributes, outfits, and dynamic motions. To further enhance applicability, we extend EchoShot to reference image-based personalized multi-shot generation and long video synthesis with infinite shot counts. Extensive evaluations demonstrate that EchoShot achieves superior identity consistency as well as attribute-level controllability in multi-shot portrait video generation. Notably, the proposed framework demonstrates potential as a foundational paradigm for general multi-shot video modeling.
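
The abstract describes the shot-aware position embedding only at a high level. The sketch below is a minimal, hypothetical PyTorch illustration of one way such a mechanism could tag every token with both a shot index and a within-shot position, so the transformer can distinguish inter-shot variation from intra-shot temporal order; the class name, learned-embedding formulation, and table sizes are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch (not the paper's code): a shot-aware positional embedding
# that adds, to each token, a learned embedding of its shot index and a learned
# embedding of its position counted from the start of that shot.
import torch
import torch.nn as nn


class ShotAwarePositionEmbedding(nn.Module):
    def __init__(self, dim: int, max_shots: int = 16, max_len: int = 1024):
        super().__init__()
        # Separate tables for which shot a token belongs to and where it sits inside that shot.
        self.shot_embed = nn.Embedding(max_shots, dim)
        self.pos_embed = nn.Embedding(max_len, dim)

    def forward(self, tokens: torch.Tensor, shot_ids: torch.Tensor) -> torch.Tensor:
        # tokens:   (batch, seq_len, dim) tokens from all shots concatenated along the sequence
        # shot_ids: (batch, seq_len) integer shot index of each token
        pos = torch.zeros_like(shot_ids)
        for b in range(shot_ids.shape[0]):
            for s in shot_ids[b].unique():
                mask = shot_ids[b] == s
                # Restart the positional counter at every shot boundary.
                pos[b, mask] = torch.arange(int(mask.sum()), device=tokens.device)
        return tokens + self.shot_embed(shot_ids) + self.pos_embed(pos)
```

Under this assumed formulation, text tokens describing a given shot could be assigned the same shot index as that shot's visual tokens, which is one plausible way to realize the per-shot text-video correspondence the abstract refers to.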

@article{wang2025_2506.15838,
  title={EchoShot: Multi-Shot Portrait Video Generation},
  author={Jiahao Wang and Hualian Sheng and Sijia Cai and Weizhan Zhang and Caixia Yan and Yachuang Feng and Bing Deng and Jieping Ye},
  journal={arXiv preprint arXiv:2506.15838},
  year={2025}
}