Demystifying the Visual Quality Paradox in Multimodal Large Language Models

18 June 2025
Shuo Xing, Lanqing Guo, Hongyuan Hua, Seoyoung Lee, Peiran Li, Yufei Wang, Zhangyang Wang, Zhengzhong Tu
Author Contacts: shuoxing@tamu.edu, tzz@tamu.edu
Main: 9 pages, 5 figures, 8 tables; Bibliography: 5 pages; Appendix: 5 pages
Abstract

Recent Multimodal Large Language Models (MLLMs) excel on benchmark vision-language tasks, yet little is known about how input visual quality shapes their responses. Does higher perceptual image quality translate into better MLLM understanding? We conduct the first systematic study spanning leading MLLMs and a suite of vision-language benchmarks, applying controlled degradations and stylistic shifts to each image. Surprisingly, we uncover a visual-quality paradox: model, task, and even individual-instance performance can improve when images deviate from human-perceived fidelity. Off-the-shelf restoration pipelines fail to reconcile these idiosyncratic preferences. To close the gap, we introduce Visual-Quality Test-Time Tuning (VQ-TTT), a lightweight adaptation module that: (1) inserts a learnable, low-rank kernel before the frozen vision encoder to modulate frequency content; and (2) fine-tunes only shallow vision-encoder layers via LoRA. VQ-TTT dynamically adjusts each input image in a single forward pass, aligning it with task-specific model preferences. Across the evaluated MLLMs and all datasets, VQ-TTT significantly lifts average accuracy with no external models, cached features, or extra training data. These findings redefine "better" visual inputs for MLLMs and highlight the need for adaptive, rather than universally "clean", imagery in the new era in which AI is the main data customer.
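The abstract names two adaptation components but gives no implementation details, so the PyTorch sketch below is an assumption-laden illustration rather than the authors' code: LowRankFilter stands in for the learnable low-rank kernel inserted before the frozen vision encoder, and LoRALinear stands in for the LoRA updates restricted to shallow encoder layers. The class names, kernel size, ranks, and initializations are all hypothetical choices.

# Illustrative sketch only (not the paper's released code): names, kernel size,
# rank, and initialization are assumptions chosen to show the two ideas from
# the abstract -- a learnable low-rank filter before a frozen vision encoder,
# and LoRA updates applied only to shallow encoder layers.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LowRankFilter(nn.Module):
    """Depthwise 2D filter whose kernel is a rank-r outer-product sum, applied
    to the input image so the learned weights can reshape frequency content."""

    def __init__(self, channels: int = 3, kernel_size: int = 7, rank: int = 2):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        # Kernel K = sum_r u_r v_r^T, so rank(K) <= rank.
        self.u = nn.Parameter(torch.randn(rank, kernel_size) * 1e-3)
        self.v = nn.Parameter(torch.randn(rank, kernel_size) * 1e-3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k = torch.einsum("ri,rj->ij", self.u, self.v)
        # Add a centered delta so the module starts close to the identity map.
        delta = torch.zeros_like(k)
        delta[self.kernel_size // 2, self.kernel_size // 2] = 1.0
        weight = (k + delta).reshape(1, 1, self.kernel_size, self.kernel_size)
        weight = weight.repeat(self.channels, 1, 1, 1)  # one filter per channel
        return F.conv2d(x, weight, padding=self.kernel_size // 2,
                        groups=self.channels)


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: y = Wx + (alpha/r)BAx."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # keep the original encoder weights frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 1e-3)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

In a setup like this, only the filter parameters and the LoRA matrices in the first few encoder blocks would receive gradients at test time, while the rest of the vision encoder and the language model stay frozen, consistent with the abstract's claim of needing no external models, cached features, or extra training data.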

View on arXiv: https://arxiv.org/abs/2506.15645
@article{xing2025_2506.15645,
  title={Demystifying the Visual Quality Paradox in Multimodal Large Language Models},
  author={Shuo Xing and Lanqing Guo and Hongyuan Hua and Seoyoung Lee and Peiran Li and Yufei Wang and Zhangyang Wang and Zhengzhong Tu},
  journal={arXiv preprint arXiv:2506.15645},
  year={2025}
}