arXiv:2312.14125
VideoPoet: A Large Language Model for Zero-Shot Video Generation

21 December 2023
Dan Kondratyuk
Lijun Yu
Xiuye Gu
José Lezama
Jonathan Huang
Grant Schindler
Rachel Hornung
Vighnesh Birodkar
Jimmy Yan
Ming-Chang Chiu
Krishna Somandepalli
Hassan Akbari
Y. Alon
Yong Cheng
Josh Dillon
Agrim Gupta
Meera Hahn
Anja Hauth
David Hendon
Alonso Martinez
David C. Minnen
Mikhail Sirotenko
Kihyuk Sohn
Xuan S. Yang
Hartwig Adam
Ming-Hsuan Yang
Irfan Essa
Huisheng Wang
David A. Ross
Bryan Seybold
Lu Jiang
Abstract

We present VideoPoet, a language model capable of synthesizing high-quality video, with matching audio, from a large variety of conditioning signals. VideoPoet employs a decoder-only transformer architecture that processes multimodal inputs -- including images, videos, text, and audio. The training protocol follows that of Large Language Models (LLMs), consisting of two stages: pretraining and task-specific adaptation. During pretraining, VideoPoet incorporates a mixture of multimodal generative objectives within an autoregressive Transformer framework. The pretrained LLM serves as a foundation that can be adapted for a range of video generation tasks. We present empirical results demonstrating the model's state-of-the-art capabilities in zero-shot video generation, specifically highlighting VideoPoet's ability to generate high-fidelity motions. Project page: http://sites.research.google/videopoet/
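The abstract describes a decoder-only transformer trained on a single token stream assembled from multiple modalities. The sketch below illustrates that idea in miniature; the special-token names, id ranges, and the stub standing in for the transformer are all illustrative assumptions, not the paper's actual vocabulary or implementation.

```python
# Toy sketch of a VideoPoet-style multimodal token stream and an
# autoregressive generation loop. All token layouts here are assumed.
import numpy as np

# Hypothetical special tokens and disjoint id ranges per modality.
SPECIAL = {"<bos>": 0, "<task:text-to-video>": 1, "<eos>": 2}
TEXT_BASE, VISUAL_BASE, AUDIO_BASE = 10, 1000, 5000
VISUAL_VOCAB = 8  # tiny toy visual codebook


def build_sequence(text_tokens, visual_tokens, audio_tokens):
    """Concatenate discrete tokens from each modality into one stream,
    the form a decoder-only LM would consume during pretraining."""
    return (
        [SPECIAL["<bos>"], SPECIAL["<task:text-to-video>"]]
        + [TEXT_BASE + t for t in text_tokens]
        + [VISUAL_BASE + v for v in visual_tokens]
        + [AUDIO_BASE + a for a in audio_tokens]
    )


def toy_next_token(prefix, rng):
    """Stand-in for the transformer: ignores the prefix and samples a
    random visual token. A real model would condition on the prefix."""
    return VISUAL_BASE + int(rng.integers(0, VISUAL_VOCAB))


def generate(prompt_tokens, n_new, seed=0):
    """Autoregressive loop: append one predicted token at a time."""
    rng = np.random.default_rng(seed)
    seq = list(prompt_tokens)
    for _ in range(n_new):
        seq.append(toy_next_token(seq, rng))
    return seq


prompt = build_sequence(text_tokens=[3, 1], visual_tokens=[0, 4], audio_tokens=[2])
out = generate(prompt, n_new=4)
```

The point of the single flat sequence is that one next-token objective covers every conditioning pattern (text-to-video, video continuation, audio-to-video, and so on) simply by changing which modalities appear in the prompt.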
