Vid2World: Crafting Video Diffusion Models to Interactive World Models

20 May 2025
Siqiao Huang, Jialong Wu, Qixing Zhou, Shangchen Miao, Mingsheng Long
Abstract

World models, which predict transitions from histories of observations and actions, have shown great promise in improving data efficiency for sequential decision making. However, existing world models often require extensive domain-specific training and still produce low-fidelity, coarse predictions, limiting their applicability in complex environments. In contrast, video diffusion models trained on large, internet-scale datasets have demonstrated impressive capabilities in generating high-quality videos that capture diverse real-world dynamics. In this work, we present Vid2World, a general approach for leveraging and transferring pre-trained video diffusion models into interactive world models. To bridge the gap, Vid2World performs causalization of a pre-trained video diffusion model, reshaping its architecture and training objective to enable autoregressive generation. Furthermore, it introduces a causal action guidance mechanism to enhance action controllability in the resulting interactive world model. Extensive experiments in robot manipulation and game simulation domains show that our method offers a scalable and effective way to repurpose highly capable video diffusion models as interactive world models.
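
The causalization step can be made concrete with a minimal sketch (not the authors' implementation; module and parameter names here are hypothetical). The idea is to replace the bidirectional temporal attention of the pre-trained video diffusion model with causally masked attention, so that frame i attends only to frames at positions <= i and the model can be rolled out autoregressively:

import torch
import torch.nn.functional as F

class CausalTemporalAttention(torch.nn.Module):
    """Temporal self-attention with a causal mask: a sketch of 'causalization'."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = torch.nn.Linear(dim, 3 * dim)
        self.proj = torch.nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch * spatial positions, frames, dim); attention runs over frames.
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Split heads: (b, heads, frames, head_dim).
        q, k, v = (z.view(b, t, self.num_heads, -1).transpose(1, 2) for z in (q, k, v))
        # is_causal=True applies the lower-triangular mask: frame i sees frames <= i.
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.proj(out.transpose(1, 2).reshape(b, t, d))

Initializing such a layer from the pre-trained bidirectional weights and fine-tuning under the diffusion objective is one plausible way to retain the pre-trained dynamics while gaining autoregressive generation.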
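
The causal action guidance mechanism is only named in the abstract; the following is a hedged sketch in the spirit of classifier-free guidance, where each frame's action embedding is dropped independently during training so the denoiser can later be queried with and without actions and the two predictions blended. Names such as null_action and guidance_scale are illustrative, not the paper's API:

import torch

def drop_actions(actions: torch.Tensor, null_action: torch.Tensor, p: float = 0.1) -> torch.Tensor:
    # actions: (batch, frames, action_dim); null_action: (action_dim,) learned token.
    # Each frame's action is kept or dropped independently, enabling per-frame guidance.
    keep = (torch.rand(actions.shape[:2], device=actions.device) > p).unsqueeze(-1)
    return torch.where(keep, actions, null_action)

@torch.no_grad()
def guided_denoise(model, noisy_frames, actions, null_action, guidance_scale: float = 1.5):
    # Two forward passes: action-conditioned and action-free.
    cond = model(noisy_frames, actions)
    uncond = model(noisy_frames, null_action.expand_as(actions))
    # Push the prediction toward the action-conditioned direction.
    return uncond + guidance_scale * (cond - uncond)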

@article{huang2025_2505.14357,
  title={Vid2World: Crafting Video Diffusion Models to Interactive World Models},
  author={Siqiao Huang and Jialong Wu and Qixing Zhou and Shangchen Miao and Mingsheng Long},
  journal={arXiv preprint arXiv:2505.14357},
  year={2025}
}