ResearchTrend.AI
Text Speaks Louder than Vision: ASCII Art Reveals Textual Biases in Vision-Language Models

2 April 2025
Zhaochen Wang
Yujun Cai
Zi Huang
Bryan Hooi
Yiwei Wang
Ming-Hsuan Yang
Tags: CoGe, VLM
Abstract

Vision-language models (VLMs) have advanced rapidly in processing multimodal information, but their ability to reconcile conflicting signals across modalities remains underexplored. This work investigates how VLMs process ASCII art, a unique medium where textual elements collectively form visual patterns, potentially creating semantic-visual conflicts. We introduce a novel evaluation framework that systematically challenges five state-of-the-art models (including GPT-4o, Claude, and Gemini) using adversarial ASCII art, where character-level semantics deliberately contradict global visual patterns. Our experiments reveal a strong text-priority bias: VLMs consistently prioritize textual information over visual patterns, with visual recognition ability declining dramatically as semantic complexity increases. Various mitigation attempts through visual parameter tuning and prompt engineering yielded only modest improvements, suggesting that this limitation requires architectural-level solutions. These findings uncover fundamental flaws in how current VLMs integrate multimodal information, providing important guidance for future model development while highlighting significant implications for content moderation systems vulnerable to adversarial examples.
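To make the setup concrete, here is a minimal sketch (an illustration, not the authors' actual pipeline) of the kind of adversarial ASCII art the abstract describes: a shape template is rendered, but every filled cell is drawn with letters cycling through a contradictory word, so character-level semantics ("circle") conflict with the global visual pattern (a triangle). The template, helper name, and word choice are all assumptions for illustration.

```python
from itertools import cycle

def adversarial_ascii(shape_rows, text):
    """Fill the '#' cells of a shape template with letters of `text`,
    creating a semantic-visual conflict: the characters spell one concept
    while their arrangement depicts another."""
    letters = cycle(text)
    return "\n".join(
        "".join(next(letters) if c == "#" else " " for c in row)
        for row in shape_rows
    )

# A 5-row triangle template: the visual pattern is a triangle,
# but the cells are filled with the word "circle".
triangle = [
    "    #    ",
    "   ###   ",
    "  #####  ",
    " ####### ",
    "#########",
]

print(adversarial_ascii(triangle, "circle"))
```

A text-priority model asked "what shape is this?" would tend to answer based on the embedded word rather than the depicted triangle, which is the bias the paper's framework probes.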

@article{wang2025_2504.01589,
  title={Text Speaks Louder than Vision: ASCII Art Reveals Textual Biases in Vision-Language Models},
  author={Zhaochen Wang and Bryan Hooi and Yiwei Wang and Ming-Hsuan Yang and Zi Huang and Yujun Cai},
  journal={arXiv preprint arXiv:2504.01589},
  year={2025}
}