VFRTok: Variable Frame Rates Video Tokenizer with Duration-Proportional Information Assumption

17 May 2025
Tianxiong Zhong
Xingye Tian
Boyuan Jiang
Xuebo Wang
Xin Tao
Pengfei Wan
Zhiwei Zhang
Abstract

Modern video generation frameworks based on Latent Diffusion Models suffer from inefficient tokenization due to the Frame-Proportional Information Assumption. Existing tokenizers provide fixed temporal compression rates, so the computational cost of the diffusion model scales linearly with the frame rate. The paper instead proposes the Duration-Proportional Information Assumption: the upper bound on a video's information capacity is proportional to its duration rather than to its number of frames. Building on this insight, the paper introduces VFRTok, a Transformer-based video tokenizer that enables variable-frame-rate encoding and decoding through asymmetric frame-rate training between the encoder and decoder. Furthermore, the paper proposes Partial Rotary Position Embeddings (RoPE) to decouple position and content modeling, grouping correlated patches into unified tokens. Partial RoPE improves content-awareness and thereby enhances video generation quality. Benefiting from its compact, continuous spatio-temporal representation, VFRTok achieves competitive reconstruction quality and state-of-the-art generation fidelity while using only 1/8 of the tokens required by existing tokenizers.
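To make the Partial RoPE idea concrete, below is a minimal, hypothetical sketch of applying rotary position embeddings to only a subset of channels in each attention head, leaving the remaining channels position-free so they can model content alone. The function names, the 50% channel split, and the use of timestamps (seconds) rather than frame indices as positions are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def rotary_angles(positions: torch.Tensor, dim: int, base: float = 10000.0) -> torch.Tensor:
    """Standard RoPE angles for a batch of (possibly fractional) positions."""
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    return torch.outer(positions.float(), inv_freq)            # (seq, dim/2)

def apply_partial_rope(x: torch.Tensor, positions: torch.Tensor, rope_dim: int) -> torch.Tensor:
    """
    x:         (batch, seq, n_heads, head_dim) queries or keys.
    positions: (seq,) token positions; using timestamps instead of frame indices
               is an assumption meant to keep the embedding frame-rate agnostic.
    rope_dim:  number of leading channels per head that receive RoPE.
    """
    x_rot, x_pass = x[..., :rope_dim], x[..., rope_dim:]        # rotated vs. pass-through channels
    ang = rotary_angles(positions, rope_dim)                    # (seq, rope_dim/2)
    cos = ang.cos()[None, :, None, :]                           # broadcast over batch and heads
    sin = ang.sin()[None, :, None, :]
    x1, x2 = x_rot[..., 0::2], x_rot[..., 1::2]                 # interleaved channel pairs
    rotated = torch.stack((x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos), dim=-1).flatten(-2)
    return torch.cat((rotated, x_pass), dim=-1)                 # unrotated channels stay content-only

# Usage: 16 tokens sampled from a 30 fps clip, 8 heads of dimension 64,
# with RoPE applied to half of the channels per head.
q = torch.randn(2, 16, 8, 64)
t = torch.arange(16) / 30.0
q_pe = apply_partial_rope(q, t, rope_dim=32)
print(q_pe.shape)  # torch.Size([2, 16, 8, 64])
```

Keeping part of each head free of positional rotation is one plausible way to let some channels attend purely by content similarity, which matches the abstract's stated goal of decoupling position and content modeling.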

@article{zhong2025_2505.12053,
  title={VFRTok: Variable Frame Rates Video Tokenizer with Duration-Proportional Information Assumption},
  author={Tianxiong Zhong and Xingye Tian and Boyuan Jiang and Xuebo Wang and Xin Tao and Pengfei Wan and Zhiwei Zhang},
  journal={arXiv preprint arXiv:2505.12053},
  year={2025}
}