Modeling Beyond MOS: Quality Assessment Models Must Integrate Context, Reasoning, and Multimodality

This position paper argues that Mean Opinion Score (MOS), while historically foundational, is no longer sufficient as the sole supervisory signal for multimedia quality assessment models. MOS reduces rich, context-sensitive human judgments to a single scalar, obscuring semantic failures, user intent, and the rationale behind quality decisions. We contend that modern quality assessment models must integrate three interdependent capabilities: (1) context-awareness, to adapt evaluations to task-specific goals and viewing conditions; (2) reasoning, to produce interpretable, evidence-grounded justifications for quality judgments; and (3) multimodality, to align perceptual and semantic cues using vision-language models. We critique the limitations of current MOS-centric benchmarks and propose a roadmap for reform: richer datasets with contextual metadata and expert rationales, and new evaluation metrics that assess semantic alignment, reasoning fidelity, and contextual sensitivity. By reframing quality assessment as a contextual, explainable, and multimodal modeling task, we aim to catalyze a shift toward more robust, human-aligned, and trustworthy evaluation systems.
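To make the contrast with a scalar MOS concrete, the sketch below shows one possible shape for the richer quality labels the abstract calls for: a record that carries contextual metadata, a semantic-alignment signal, and an evidence-grounded rationale alongside the perceptual score. The class and field names are hypothetical illustrations, not a schema proposed in the paper.

```python
from dataclasses import dataclass, field

# A bare MOS label: one scalar, no context, no rationale.
mos_label: float = 3.7

@dataclass
class QualityJudgment:
    """Illustrative structured label combining the three capabilities argued for above."""
    # Context-awareness: task goal and viewing conditions the score is conditioned on.
    task: str                      # e.g. "medical diagnosis" vs. "casual streaming"
    viewing_conditions: str        # e.g. "mobile, outdoor, high ambient light"
    # Perceptual score retained for compatibility with MOS-style supervision.
    perceptual_score: float
    # Multimodality: semantic alignment between visual content and intended meaning.
    semantic_alignment: float
    # Reasoning: evidence-grounded justification for the judgment.
    rationale: str
    # Expert annotations of the kind richer datasets could provide.
    expert_tags: list[str] = field(default_factory=list)

example = QualityJudgment(
    task="surveillance review",
    viewing_conditions="desktop monitor, dim room",
    perceptual_score=3.7,
    semantic_alignment=0.42,
    rationale="Compression artifacts obscure the licence plate, so the clip "
              "fails the task goal despite acceptable overall sharpness.",
    expert_tags=["blocking artifacts", "task-critical region degraded"],
)
print(example)
```

Under this hypothetical schema, two clips with identical perceptual scores can still receive different overall judgments once task goals and semantic failures are taken into account, which is precisely the information a single MOS value discards.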
@article{kerkouri2025_2505.19696,
  title   = {Modeling Beyond MOS: Quality Assessment Models Must Integrate Context, Reasoning, and Multimodality},
  author  = {Mohamed Amine Kerkouri and Marouane Tliba and Aladine Chetouani and Nour Aburaed and Alessandro Bruno},
  journal = {arXiv preprint arXiv:2505.19696},
  year    = {2025}
}