SkyReels-A2: Compose Anything in Video Diffusion Transformers

3 April 2025
Zhengcong Fei, Debang Li, Di Qiu, Jiahua Wang, Yikun Dou, Rui Wang, Jingtao Xu, Mingyuan Fan, Guibin Chen, Yang Li, Yahui Zhou
Communities: DiffM, VGen
Abstract

This paper presents SkyReels-A2, a controllable video generation framework capable of assembling arbitrary visual elements (e.g., characters, objects, backgrounds) into synthesized videos based on textual prompts, while maintaining strict consistency with the reference image for each element. We term this task elements-to-video (E2V); its primary challenges lie in preserving the fidelity of each reference element, ensuring coherent composition of the scene, and achieving natural outputs. To address these, we first design a comprehensive data pipeline to construct prompt-reference-video triplets for model training. Next, we propose a novel image-text joint embedding model to inject multi-element representations into the generative process, balancing element-specific consistency with global coherence and text alignment. We also optimize the inference pipeline for both speed and output stability. Moreover, we introduce a carefully curated benchmark, A2-Bench, for systematic evaluation. Experiments demonstrate that our framework can generate diverse, high-quality videos with precise element control. SkyReels-A2 is the first open-source commercial-grade model for E2V generation, performing favorably against advanced closed-source commercial models. We anticipate SkyReels-A2 will advance creative applications such as drama and virtual e-commerce, pushing the boundaries of controllable video generation.
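
To make the "image-text joint embedding" idea from the abstract more concrete, below is a minimal, hypothetical sketch of how per-element reference-image embeddings and a text-prompt embedding could be joined into a single conditioning sequence for a video diffusion transformer. All module names, dimensions, and the fusion scheme here are assumptions for illustration only; this is not the released SkyReels-A2 implementation.

```python
# Hypothetical sketch of elements-to-video (E2V) conditioning: project each
# reference-element embedding (character, object, background) and the text
# embedding into a shared space, then concatenate them into one sequence that
# a diffusion transformer can cross-attend to. Dimensions are placeholders.

import torch
import torch.nn as nn


class JointImageTextConditioner(nn.Module):
    """Fuses N reference-element embeddings and a text embedding (assumed scheme)."""

    def __init__(self, image_dim: int = 768, text_dim: int = 1024, cond_dim: int = 1152):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, cond_dim)  # per-element projection
        self.text_proj = nn.Linear(text_dim, cond_dim)    # prompt projection
        # Learned type embeddings distinguish element tokens from text tokens.
        self.type_embed = nn.Embedding(2, cond_dim)

    def forward(self, element_embeds: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
        # element_embeds: (batch, num_elements, image_dim), one row per reference image
        # text_embeds:    (batch, num_text_tokens, text_dim)
        img_tokens = self.image_proj(element_embeds) + self.type_embed.weight[0]
        txt_tokens = self.text_proj(text_embeds) + self.type_embed.weight[1]
        # Concatenate into a single conditioning sequence for cross-attention.
        return torch.cat([img_tokens, txt_tokens], dim=1)


if __name__ == "__main__":
    cond = JointImageTextConditioner()
    elements = torch.randn(2, 3, 768)   # 3 reference elements per sample
    prompt = torch.randn(2, 77, 1024)   # text-encoder output
    print(cond(elements, prompt).shape)  # torch.Size([2, 80, 1152])
```

In practice the conditioning sequence would feed the cross-attention layers of the video DiT during denoising; the details of how SkyReels-A2 balances element fidelity against global coherence are in the paper itself.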

View on arXiv: https://arxiv.org/abs/2504.02436
@article{fei2025_2504.02436,
  title={SkyReels-A2: Compose Anything in Video Diffusion Transformers},
  author={Zhengcong Fei and Debang Li and Di Qiu and Jiahua Wang and Yikun Dou and Rui Wang and Jingtao Xu and Mingyuan Fan and Guibin Chen and Yang Li and Yahui Zhou},
  journal={arXiv preprint arXiv:2504.02436},
  year={2025}
}