
X-Fusion: Introducing New Modality to Frozen Large Language Models

Sicheng Mo, Thao Nguyen, Xun Huang, Siddharth Srinivasan Iyer, Yijun Li, Yuchen Liu, Abhishek Tandon, Eli Shechtman, Krishna Kumar Singh, Yong Jae Lee, Bolei Zhou, Yuheng Li
Abstract

We propose X-Fusion, a framework that extends pretrained Large Language Models (LLMs) for multimodal tasks while preserving their language capabilities. X-Fusion employs a dual-tower design with modality-specific weights, keeping the LLM's parameters frozen while integrating vision-specific information for both understanding and generation. Our experiments demonstrate that X-Fusion consistently outperforms alternative architectures on both image-to-text and text-to-image tasks. We find that incorporating understanding-focused data improves generation quality, reducing image data noise enhances overall performance, and feature alignment accelerates convergence for smaller models but has minimal impact on larger ones. Our findings provide valuable insights into building efficient unified multimodal models.
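The abstract describes a dual-tower design in which every token is processed by modality-specific transformer weights while the pretrained language weights stay frozen. A minimal sketch of that idea in PyTorch is given below; the class name DualTowerBlock, the layer shapes, and the boolean is_image routing mask are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Minimal sketch of a dual-tower layer (assumed structure, not the paper's code).
import torch
import torch.nn as nn


class DualTowerBlock(nn.Module):
    """One layer with a frozen text tower and a trainable vision tower.

    Both towers see the full joint sequence; each output token is taken
    from the tower matching that token's modality, so gradients only
    flow into the vision-specific weights.
    """

    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        # Frozen stand-in for one pretrained LLM layer.
        self.text_tower = nn.TransformerEncoderLayer(
            d_model, n_heads, batch_first=True
        )
        for p in self.text_tower.parameters():
            p.requires_grad = False  # keep the LLM's parameters frozen

        # Trainable vision tower with the same shape as the text tower.
        self.vision_tower = nn.TransformerEncoderLayer(
            d_model, n_heads, batch_first=True
        )

    def forward(self, x: torch.Tensor, is_image: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); is_image: (batch, seq) boolean mask.
        text_out = self.text_tower(x)
        vision_out = self.vision_tower(x)
        # Route each position to the output of its modality's tower.
        return torch.where(is_image.unsqueeze(-1), vision_out, text_out)


# Usage: a mixed sequence where the last 8 positions are image tokens.
block = DualTowerBlock()
tokens = torch.randn(2, 16, 512)
is_image = torch.zeros(2, 16, dtype=torch.bool)
is_image[:, 8:] = True
out = block(tokens, is_image)  # (2, 16, 512)
```

Because the text tower is frozen, an optimizer built over trainable parameters only would update the vision tower, which is one plausible reading of how the design preserves the LLM's language capabilities while learning the new modality.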

@article{mo2025_2504.20996,
  title={X-Fusion: Introducing New Modality to Frozen Large Language Models},
  author={Sicheng Mo and Thao Nguyen and Xun Huang and Siddharth Srinivasan Iyer and Yijun Li and Yuchen Liu and Abhishek Tandon and Eli Shechtman and Krishna Kumar Singh and Yong Jae Lee and Bolei Zhou and Yuheng Li},
  journal={arXiv preprint arXiv:2504.20996},
  year={2025}
}