PhyT2V: LLM-Guided Iterative Self-Refinement for Physics-Grounded Text-to-Video Generation

30 November 2024
Qiyao Xue, Xiangyu Yin, Boyuan Yang, Wei Gao
Communities: DiffM, VGen
Abstract

Text-to-video (T2V) generation has recently been enabled by transformer-based diffusion models, but current T2V models struggle to adhere to real-world common knowledge and physical rules, due to their limited understanding of physical realism and deficient temporal modeling. Existing solutions are either data-driven or require extra model inputs, and they do not generalize to out-of-distribution domains. In this paper, we present PhyT2V, a new data-independent T2V technique that expands the current T2V model's video generation capability to out-of-distribution domains by enabling chain-of-thought and step-back reasoning in T2V prompting. Our experiments show that PhyT2V improves existing T2V models' adherence to real-world physical rules by 2.3x, and achieves a 35% improvement over T2V prompt enhancers. The source code is available at: this https URL.
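To make the method concrete, below is a minimal sketch of the iterative self-refinement loop the abstract describes: a candidate video is generated, scored for adherence to physical rules, and the prompt is revised by an LLM using step-back reasoning (eliciting the governing physical rules) followed by chain-of-thought rewriting. This is a sketch under stated assumptions, not the authors' implementation; all names here (phyt2v_refine, t2v_model, scorer, llm, max_rounds) are hypothetical placeholders.

def phyt2v_refine(prompt, t2v_model, llm, scorer, max_rounds=3):
    """Hypothetical sketch of PhyT2V-style LLM-guided prompt refinement.

    t2v_model : object with a .generate(prompt) method returning a video
    llm       : callable that maps a text query to a text answer
    scorer    : callable (video, prompt) -> physics-adherence score
    """
    best_prompt, best_score = prompt, float("-inf")
    current = prompt
    for _ in range(max_rounds):
        video = t2v_model.generate(current)    # render a candidate video
        score = scorer(video, current)         # score physical realism
        if score > best_score:                 # keep the best prompt seen so far
            best_prompt, best_score = current, score
        # Step-back reasoning: ask the LLM which physical rules the scene must obey.
        rules = llm("List the physical rules governing this scene: " + current)
        # Chain-of-thought rewriting: fold those rules into a revised prompt.
        current = llm("Rewrite this prompt so the generated video obeys these rules.\n"
                      "Rules: " + rules + "\nPrompt: " + current)
    return best_prompt

Because the loop only tracks the best-scoring prompt, a refinement round that degrades physical plausibility is simply discarded rather than compounding across iterations.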

@article{xue2025_2412.00596,
  title={PhyT2V: LLM-Guided Iterative Self-Refinement for Physics-Grounded Text-to-Video Generation},
  author={Qiyao Xue and Xiangyu Yin and Boyuan Yang and Wei Gao},
  journal={arXiv preprint arXiv:2412.00596},
  year={2025}
}