
InfLVG: Reinforce Inference-Time Consistent Long Video Generation with GRPO

Abstract

Recent advances in text-to-video generation, particularly with autoregressive models, have enabled the synthesis of high-quality videos depicting individual scenes. However, extending these models to generate long, cross-scene videos remains a significant challenge. As the context length grows during autoregressive decoding, computational costs rise sharply, and the model's ability to maintain consistency and adhere to evolving textual prompts deteriorates. We introduce InfLVG, an inference-time framework that enables coherent long video generation without requiring additional long-form video data. InfLVG leverages a learnable context selection policy, optimized via Group Relative Policy Optimization (GRPO), to dynamically identify and retain the most semantically relevant context throughout the generation process. Instead of accumulating the entire generation history, the policy ranks and selects the top-K most contextually relevant tokens, allowing the model to maintain a fixed computational budget while preserving content consistency and prompt alignment. To optimize the policy, we design a hybrid reward function that jointly captures semantic alignment, cross-scene consistency, and artifact reduction. To benchmark performance, we introduce the Cross-scene Video Benchmark (CsVBench) along with an Event Prompt Set (EPS) that simulates complex multi-scene transitions involving shared subjects and varied actions/backgrounds. Experimental results show that InfLVG can extend video length by up to 9×, achieving strong consistency and semantic fidelity across scenes. Our code is available at this https URL.
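The abstract describes two mechanisms: top-K context selection under a fixed budget, and a hybrid reward combining semantic alignment, cross-scene consistency, and artifact reduction. The paper's actual interfaces are not given here; the following is a minimal illustrative sketch in PyTorch, where all names (select_context, hybrid_reward), shapes, and the equal reward weights are assumptions, not the authors' implementation.

import torch

def select_context(policy_scores: torch.Tensor,
                   context_tokens: torch.Tensor,
                   k: int) -> torch.Tensor:
    # policy_scores:  (T,) per-token relevance from the learned selection
    #                 policy (hypothetical interface).
    # context_tokens: (T, D) accumulated autoregressive generation history.
    # k:              fixed context budget, so compute stays constant as
    #                 the video grows.
    k = min(k, context_tokens.shape[0])
    top = torch.topk(policy_scores, k).indices
    top, _ = torch.sort(top)          # keep retained tokens in temporal order
    return context_tokens[top]        # (k, D) context fed to the next step

def hybrid_reward(semantic_alignment: torch.Tensor,
                  cross_scene_consistency: torch.Tensor,
                  artifact_score: torch.Tensor,
                  weights=(1.0, 1.0, 1.0)) -> torch.Tensor:
    # Scalar reward per sampled rollout; artifacts are penalized.
    # The paper's exact weighting and reward models are not specified here.
    return (weights[0] * semantic_alignment
            + weights[1] * cross_scene_consistency
            - weights[2] * artifact_score)

Under GRPO, several context selections would be sampled per generation step, scored with such a reward, and the policy updated toward selections whose reward exceeds the group average; the sketch above only shows the selection and reward pieces of that loop.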

@article{fang2025_2505.17574,
  title={InfLVG: Reinforce Inference-Time Consistent Long Video Generation with GRPO},
  author={Xueji Fang and Liyuan Ma and Zhiyang Chen and Mingyuan Zhou and Guo-jun Qi},
  journal={arXiv preprint arXiv:2505.17574},
  year={2025}
}