Video Diffusion Transformers are In-Context Learners

Abstract

This paper investigates a solution for enabling in-context capabilities of video diffusion transformers, with minimal tuning required for activation. Specifically, we propose a simple pipeline to leverage in-context generation: (i) concatenate videos along the spatial or temporal dimension, (ii) jointly caption multi-scene video clips from one source, and (iii) apply task-specific fine-tuning using carefully curated small datasets. Through a series of diverse controllable tasks, we demonstrate qualitatively that existing advanced text-to-video models can effectively perform in-context generation. Notably, this allows for the creation of consistent multi-scene videos exceeding 30 seconds in duration without additional computational overhead. Importantly, the method requires no modifications to the original models and yields high-fidelity video outputs that align better with prompt specifications and maintain role consistency. Our framework provides a valuable tool for the research community and offers critical insights for advancing product-level controllable video generation systems. The data, code, and model weights are publicly available at: this https URL.
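
As a concrete illustration of steps (i) and (ii), the minimal sketch below concatenates two video tensors along the temporal or spatial axis and merges their per-scene captions into one joint prompt. The tensor layout (B, C, T, H, W), the function names, and the caption format are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def concat_videos(clip_a: torch.Tensor, clip_b: torch.Tensor, dim: str = "time") -> torch.Tensor:
    """Concatenate two video tensors of shape (B, C, T, H, W) along the
    temporal or spatial axis. The layout is an assumption for illustration."""
    if dim == "time":
        return torch.cat([clip_a, clip_b], dim=2)   # stack scenes back-to-back in time
    if dim == "width":
        return torch.cat([clip_a, clip_b], dim=4)   # place scenes side by side spatially
    raise ValueError(f"unsupported dim: {dim}")

def joint_caption(captions: list[str]) -> str:
    """Merge per-scene captions from the same source video into a single prompt
    (hypothetical formatting, for illustration only)."""
    return " ".join(f"[Scene {i + 1}] {c}" for i, c in enumerate(captions))

# Example: two clips from the same source video, captioned jointly.
a = torch.randn(1, 16, 49, 60, 90)  # e.g. a latent video of shape (B, C, T, H, W)
b = torch.randn(1, 16, 49, 60, 90)
video = concat_videos(a, b, dim="time")  # resulting shape: (1, 16, 98, 60, 90)
prompt = joint_caption(["A chef slices vegetables.", "The chef plates the dish."])
```

The concatenated tensor and joint caption would then serve as a single training sample for the task-specific fine-tuning in step (iii).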

@article{fei2025_2412.10783,
  title={Video Diffusion Transformers are In-Context Learners},
  author={Zhengcong Fei and Di Qiu and Debang Li and Changqian Yu and Mingyuan Fan},
  journal={arXiv preprint arXiv:2412.10783},
  year={2025}
}