
TorchUMM: A Unified Multimodal Model Codebase for Evaluation, Analysis, and Post-training

Yinyi Luo
Wenwen Wang
Hayes Bai
Hongyu Zhu
Hao Chen
Pan He
Marios Savvides
Sharon Li
Jindong Wang
Main: 14 pages, 5 figures, 14 tables; Bibliography: 5 pages; Appendix: 4 pages
Abstract

Recent advances in unified multimodal models (UMMs) have led to a proliferation of architectures capable of understanding, generating, and editing across visual and textual modalities. However, developing a unified framework for UMMs remains challenging due to the diversity of model architectures and the heterogeneity of training paradigms and implementation details. In this paper, we present TorchUMM, the first unified codebase for comprehensive evaluation, analysis, and post-training across diverse UMM backbones, tasks, and datasets. TorchUMM supports a broad spectrum of models covering a wide range of scales and design paradigms. Our benchmark encompasses three core task dimensions: multimodal understanding, generation, and editing. It integrates both established and novel datasets to evaluate perception, reasoning, compositionality, and instruction-following abilities. By providing a unified interface and standardized evaluation protocols, TorchUMM enables fair and reproducible comparisons across heterogeneous models and fosters deeper insights into their strengths and limitations, facilitating the development of more capable unified multimodal systems. Code is available at: this https URL.
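To make the idea of a "unified interface with standardized evaluation protocols" concrete, the sketch below shows one way such an adapter layer could look in Python. All names here (Sample, UnifiedModelAdapter, EchoAdapter, evaluate_understanding) are hypothetical illustrations, not the actual TorchUMM API; they only demonstrate the pattern of wrapping heterogeneous backbones behind a common interface so that a single evaluation loop can score them.

```python
# Hypothetical sketch of a unified evaluation interface.
# Class and function names are illustrative assumptions, NOT the TorchUMM API.
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class Sample:
    """One evaluation item: a text prompt, an optional image path, and a reference answer."""
    prompt: str
    image: Optional[str]
    reference: str


class UnifiedModelAdapter(ABC):
    """Common interface a backbone must expose so heterogeneous UMMs
    can be scored by the same evaluation protocol."""

    @abstractmethod
    def understand(self, sample: Sample) -> str:
        """Answer a question about an (image, text) input."""

    @abstractmethod
    def generate(self, sample: Sample) -> str:
        """Produce an image (here, an identifier/path) from a text prompt."""


class EchoAdapter(UnifiedModelAdapter):
    """Trivial stand-in model used only to make this sketch runnable."""

    def understand(self, sample: Sample) -> str:
        return sample.reference  # pretend the model always answers correctly

    def generate(self, sample: Sample) -> str:
        return f"generated_for::{sample.prompt}"


def evaluate_understanding(model: UnifiedModelAdapter, data: List[Sample]) -> Dict[str, float]:
    """Exact-match accuracy as a placeholder for a standardized understanding metric."""
    correct = sum(model.understand(s).strip() == s.reference.strip() for s in data)
    return {"accuracy": correct / max(len(data), 1)}


if __name__ == "__main__":
    toy_set = [
        Sample("What color is the sky?", None, "blue"),
        Sample("2 + 2 = ?", None, "4"),
    ]
    print(evaluate_understanding(EchoAdapter(), toy_set))
```

Under this kind of design, adding a new backbone means writing one adapter rather than a new evaluation pipeline, which is what makes comparisons across architectures fair and reproducible.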
