Characterizing and Efficiently Accelerating Multimodal Generation Model Inference

30 September 2024
Yejin Lee
Anna Y. Sun
Basil Hosmer
Bilge Acun
Can Balioglu
Changhan Wang
Charles David Hernandez
Christian Puhrsch
Daniel Haziza
Driss Guessous
Francisco Massa
Jacob Kahn
Jeffrey Wan
Jeremy Reizenstein
Jiaqi Zhai
Joe Isaacson
Joel Schlosser
Juan Pino
Kaushik Ram Sadagopan
Leonid Shamis
Linjian Ma
Min-Jae Hwang
Mingda Chen
Mostafa Elhoushi
Pedro Rodriguez
Ram Pasunuru
Scott Yih
Sravya Popuri
Xing Liu
Carole-Jean Wu
arXiv · PDF · HTML
Abstract

Generative artificial intelligence (AI) technology is revolutionizing the computing industry. Not only have its applications broadened to various sectors, but it also poses new system design and optimization opportunities. The technology is capable of understanding and responding in multiple modalities. However, this advanced capability currently comes with significant system resource demands. To sustainably scale generative AI capabilities to billions of users in the world, inference must be fast and efficient. This paper pinpoints key system design and optimization opportunities by characterizing a family of emerging multi-modal generation models on real systems. Auto-regressive token generation is a critical latency bottleneck, typically dominated by GPU idle time. In addition to the memory-intensive attention across these generative AI models, linear operations from the feed-forward networks of Transformer-based models constitute a significant share of inference latency. We demonstrate that state-of-the-art optimization levers, spanning from applications to system software and hardware, set a 3.88x better baseline.
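
To make the bottleneck concrete, below is a minimal sketch (not the paper's code) of the auto-regressive decode loop the abstract describes: a single toy Transformer decoder layer with a KV cache, written in PyTorch. All dimensions (D_MODEL, N_HEADS, D_FF), the single-layer structure, and the random weights are illustrative assumptions. The point is that each generated token must re-read the entire KV cache (memory-bound attention) and pass through large linear layers (the feed-forward network), so per-token latency, rather than throughput, dominates generation time.

# Minimal sketch, not the paper's implementation: one toy decoder layer with a
# KV cache, run token by token. Sizes and weights are illustrative assumptions.
import torch
import torch.nn.functional as F

D_MODEL, N_HEADS, D_FF = 1024, 16, 4096
D_HEAD = D_MODEL // N_HEADS

# Randomly initialized projection weights for a single decoder layer (sketch only).
w_qkv = torch.randn(D_MODEL, 3 * D_MODEL)
w_out = torch.randn(D_MODEL, D_MODEL)
w_ff1 = torch.randn(D_MODEL, D_FF)
w_ff2 = torch.randn(D_FF, D_MODEL)

def decode_step(x, kv_cache):
    """One auto-regressive step: x is the newest token's hidden state,
    shape (1, 1, D_MODEL); kv_cache holds keys/values of all previous tokens."""
    q, k, v = (x @ w_qkv).split(D_MODEL, dim=-1)
    # Reshape to (batch, heads, seq, head_dim).
    q = q.view(1, 1, N_HEADS, D_HEAD).transpose(1, 2)
    k = k.view(1, 1, N_HEADS, D_HEAD).transpose(1, 2)
    v = v.view(1, 1, N_HEADS, D_HEAD).transpose(1, 2)
    # Append to the cache: attention re-reads every past key/value on every
    # step, which makes this phase memory-bandwidth-bound rather than compute-bound.
    kv_cache["k"] = torch.cat([kv_cache["k"], k], dim=2)
    kv_cache["v"] = torch.cat([kv_cache["v"], v], dim=2)
    attn = F.scaled_dot_product_attention(q, kv_cache["k"], kv_cache["v"])
    attn = attn.transpose(1, 2).reshape(1, 1, D_MODEL) @ w_out
    # Feed-forward network: two large linear layers, the other major source of
    # per-token latency the abstract points to.
    return F.relu(attn @ w_ff1) @ w_ff2

cache = {"k": torch.zeros(1, N_HEADS, 0, D_HEAD),
         "v": torch.zeros(1, N_HEADS, 0, D_HEAD)}
x = torch.randn(1, 1, D_MODEL)
with torch.no_grad():
    for _ in range(8):
        # A real model would project to logits, sample a token, and embed it;
        # here the layer output is fed back directly just to exercise the loop.
        x = decode_step(x, cache)

This per-token loop is the step where system-level optimization levers of the kind the paper surveys (for example, faster attention kernels, lower-precision linear layers, or better GPU scheduling) would typically be applied; the sketch shows only the structure of the loop, not any specific optimization from the paper.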

View on arXiv
@article{lee2025_2410.00215,
  title={Characterizing and Efficiently Accelerating Multimodal Generation Model Inference},
  author={Yejin Lee and Anna Sun and Basil Hosmer and Bilge Acun and Can Balioglu and Changhan Wang and Charles David Hernandez and Christian Puhrsch and Daniel Haziza and Driss Guessous and Francisco Massa and Jacob Kahn and Jeffrey Wan and Jeremy Reizenstein and Jiaqi Zhai and Joe Isaacson and Joel Schlosser and Juan Pino and Kaushik Ram Sadagopan and Leonid Shamis and Linjian Ma and Min-Jae Hwang and Mingda Chen and Mostafa Elhoushi and Pedro Rodriguez and Ram Pasunuru and Scott Yih and Sravya Popuri and Xing Liu and Carole-Jean Wu},
  journal={arXiv preprint arXiv:2410.00215},
  year={2025}
}