arXiv:2512.14098

Cornserve: Efficiently Serving Any-to-Any Multimodal Models

16 December 2025
Jeff J. Ma, Jae-Won Chung, Jisang Ahn, Yizhuo Liang, Akshay Jajoo, Myungjin Lee, Mosharaf Chowdhury
Main: 12 pages; bibliography: 3 pages; appendix: 3 pages; 13 figures, 5 tables
Abstract

We present Cornserve, an efficient online serving system for an emerging class of multimodal models called Any-to-Any models. Any-to-Any models accept combinations of text and multimodal data (e.g., image, video, audio) as input and also generate combinations of text and multimodal data as output, introducing request type, computation path, and computation scaling heterogeneity in model serving. Cornserve allows model developers to describe the computation graph of generic Any-to-Any models, which consists of heterogeneous components such as multimodal encoders, autoregressive models like Large Language Models (LLMs), and multimodal generators like Diffusion Transformers (DiTs). Given this, Cornserve's planner automatically finds an optimized deployment plan for the model, including whether and how to disaggregate the model into smaller components based on model and workload characteristics. Cornserve's distributed runtime then executes the model per the plan, efficiently handling Any-to-Any model heterogeneity during online serving. Evaluations show that Cornserve can efficiently serve diverse Any-to-Any models and workloads, delivering up to 3.81× throughput improvement and up to 5.79× tail latency reduction over existing solutions.
