Controllability, temporal coherence, and detail synthesis remain the most critical challenges in video generation. In this paper, we focus on a commonly used yet underexplored cinematic technique known as Frame In and Frame Out. Specifically, starting from image-to-video generation, users can control objects in the image to naturally leave the scene, or provide new identity references to enter the scene, guided by a user-specified motion trajectory. To support this task, we introduce a new semi-automatically curated dataset, a comprehensive evaluation protocol targeting this setting, and an efficient identity-preserving, motion-controllable video Diffusion Transformer architecture. Our evaluation shows that the proposed approach significantly outperforms existing baselines.
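To make the trajectory-guided control concrete, the sketch below shows one common way such a user-specified motion trajectory can be turned into a dense conditioning signal for a video generation model. This is a minimal illustration, not the authors' implementation: the Gaussian-heatmap encoding, the function name rasterize_trajectory, and all parameters are assumptions.

import numpy as np

def rasterize_trajectory(points, num_frames, height, width, sigma=4.0):
    """Render a user-specified trajectory (one (x, y) point per frame) as a
    sequence of Gaussian heatmaps, a common way to turn sparse motion control
    into a dense per-frame conditioning signal. Hypothetical helper."""
    ys, xs = np.mgrid[0:height, 0:width]
    maps = np.zeros((num_frames, height, width), dtype=np.float32)
    for t, (x, y) in enumerate(points[:num_frames]):
        # Place a Gaussian bump at the trajectory point for frame t.
        maps[t] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return maps

# Example: an object "frames out" by drifting past the right image border.
trajectory = [(10 + 8 * t, 32) for t in range(16)]
control = rasterize_trajectory(trajectory, num_frames=16, height=64, width=64)
print(control.shape)  # (16, 64, 64) -- could be stacked with video latents as extra channels

In practice, such heatmap sequences (or learned embeddings of the trajectory) would be fed to the generator as additional conditioning alongside the reference image and identity features; the exact conditioning mechanism used in the paper is not specified in this abstract.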
@article{wang2025_2505.21491,
  title   = {Frame In-N-Out: Unbounded Controllable Image-to-Video Generation},
  author  = {Boyang Wang and Xuweiyi Chen and Matheus Gadelha and Zezhou Cheng},
  journal = {arXiv preprint arXiv:2505.21491},
  year    = {2025}
}