Towards deployment-centric multimodal AI beyond vision and language

4 April 2025
Xianyuan Liu
Jiayang Zhang
Shuo Zhou
Thijs L. van der Plas
Avish Vijayaraghavan
Anastasiia Grishina
Mengdie Zhuang
Daniel Schofield
Christopher Tomlinson
Yuhan Wang
Ruizhe Li
Louisa van Zeeland
Sina Tabakhi
Cyndie Demeocq
Xiang Li
Arunav Das
Orlando Timmerman
Thomas Baldwin-McDonald
Jinge Wu
Peizhen Bai
Zahraa Al Sahili
Omnia Alwazzan
Thao N. Do
Mohammod N. I. Suvon
Angeline Wang
Lucia Cipolina-Kun
Luigi A. Moretti
Lucas Farndale
Nitisha Jain
Natalia Efremova
Yan Ge
Marta Varela
Hak-Keung Lam
Oya Celiktutan
Ben R. Evans
Alejandro Coca-Castro
Honghan Wu
Zahraa S. Abdallah
Chen Chen
Valentin Danchev
Nataliya Tkachenko
Lei Lu
Tingting Zhu
Gregory G. Slabaugh
Roger K. Moore
William K. Cheung
Peter H. Charlton
Haiping Lu
Abstract

Multimodal artificial intelligence (AI) integrates diverse types of data via machine learning to improve understanding, prediction, and decision-making across disciplines such as healthcare, science, and engineering. However, most multimodal AI advances focus on models for vision and language data, while their deployability remains a key challenge. We advocate a deployment-centric workflow that incorporates deployment constraints early to reduce the likelihood of undeployable solutions, complementing data-centric and model-centric approaches. We also emphasise deeper integration across multiple levels of multimodality and multidisciplinary collaboration to significantly broaden the research scope beyond vision and language. To facilitate this approach, we identify common multimodal-AI-specific challenges shared across disciplines and examine three real-world use cases: pandemic response, self-driving car design, and climate change adaptation, drawing expertise from healthcare, social science, engineering, science, sustainability, and finance. By fostering multidisciplinary dialogue and open research practices, our community can accelerate deployment-centric development for broad societal impact.
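
As a minimal, hypothetical sketch of what "incorporating deployment constraints early" can mean in practice (this code is not from the paper; the thresholds, modality names, and candidate designs are illustrative assumptions), candidate multimodal designs can be screened against deployment requirements before any training effort is spent:

# Illustrative sketch only -- not code from the paper. It screens
# hypothetical multimodal designs against deployment constraints
# (latency, memory, sensors available at deployment) before training,
# in the spirit of a deployment-centric workflow.
from dataclasses import dataclass

@dataclass(frozen=True)
class DeploymentConstraints:
    max_latency_ms: float            # end-to-end inference budget on target hardware
    max_model_mb: float              # on-device memory budget
    available_modalities: frozenset  # sensors guaranteed to exist at deployment

@dataclass(frozen=True)
class CandidateDesign:
    name: str
    modalities: frozenset            # modalities required at inference time
    est_latency_ms: float            # profiled or estimated on target hardware
    est_model_mb: float

def deployable(d: CandidateDesign, c: DeploymentConstraints) -> bool:
    """Reject designs that cannot satisfy the deployment constraints."""
    return (d.modalities <= c.available_modalities
            and d.est_latency_ms <= c.max_latency_ms
            and d.est_model_mb <= c.max_model_mb)

# Hypothetical self-driving scenario using non-vision modalities.
constraints = DeploymentConstraints(
    max_latency_ms=50.0,
    max_model_mb=200.0,
    available_modalities=frozenset({"radar", "audio", "gps"}),
)
candidates = [
    CandidateDesign("early-fusion-transformer", frozenset({"camera", "lidar"}), 40.0, 450.0),
    CandidateDesign("late-fusion-mlp", frozenset({"radar", "gps"}), 20.0, 80.0),
]
for d in candidates:
    verdict = "proceed to training" if deployable(d, constraints) else "rejected early"
    print(f"{d.name}: {verdict}")

Screening of this kind is cheap, and it shifts deployability from an afterthought to a design input, complementing the data-centric and model-centric steps that follow.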

View on arXiv
@article{liu2025_2504.03603,
  title={Towards deployment-centric multimodal AI beyond vision and language},
  author={Xianyuan Liu and Jiayang Zhang and Shuo Zhou and Thijs L. van der Plas and Avish Vijayaraghavan and Anastasiia Grishina and Mengdie Zhuang and Daniel Schofield and Christopher Tomlinson and Yuhan Wang and Ruizhe Li and Louisa van Zeeland and Sina Tabakhi and Cyndie Demeocq and Xiang Li and Arunav Das and Orlando Timmerman and Thomas Baldwin-McDonald and Jinge Wu and Peizhen Bai and Zahraa Al Sahili and Omnia Alwazzan and Thao N. Do and Mohammod N. I. Suvon and Angeline Wang and Lucia Cipolina-Kun and Luigi A. Moretti and Lucas Farndale and Nitisha Jain and Natalia Efremova and Yan Ge and Marta Varela and Hak-Keung Lam and Oya Celiktutan and Ben R. Evans and Alejandro Coca-Castro and Honghan Wu and Zahraa S. Abdallah and Chen Chen and Valentin Danchev and Nataliya Tkachenko and Lei Lu and Tingting Zhu and Gregory G. Slabaugh and Roger K. Moore and William K. Cheung and Peter H. Charlton and Haiping Lu},
  journal={arXiv preprint arXiv:2504.03603},
  year={2025}
}