Robustifying Vision-Language Models via Dynamic Token Reweighting

22 May 2025
Tanqiu Jiang, Jiacheng Liang, Rongyi Zhu, Jiawei Zhou, Fenglong Ma, Ting Wang
    AAML
Main: 8 pages · 13 figures · 9 tables · Appendix: 11 pages
Abstract

Large vision-language models (VLMs) are highly vulnerable to jailbreak attacks that exploit visual-textual interactions to bypass safety guardrails. In this paper, we present DTR, a novel inference-time defense that mitigates multimodal jailbreak attacks by optimizing the model's key-value (KV) caches. Rather than relying on curated safety-specific data or costly image-to-text conversion, we introduce a new formulation of the safety-relevant distributional shift induced by the visual modality. This formulation enables DTR to dynamically adjust visual token weights, minimizing the impact of adversarial visual inputs while preserving the model's general capabilities and inference efficiency. Extensive evaluation across diverse VLMs and attack benchmarks demonstrates that DTR outperforms existing defenses in both attack robustness and benign task performance, marking the first successful application of KV cache optimization for safety enhancement in multimodal foundation models. The code for replicating DTR is available: this https URL. (Warning: this paper contains potentially harmful content generated by VLMs.)
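
The abstract only sketches the mechanism, but its core operation, down-weighting visual tokens inside the KV cache at inference time, can be illustrated concretely. The following is a minimal, hypothetical sketch, not the authors' implementation: the reweight_visual_kv helper, the toy score-to-weight mapping, and the cache layout are all assumptions standing in for DTR's actual distributional-shift formulation.

    import torch

    def reweight_visual_kv(key_cache: torch.Tensor,
                           value_cache: torch.Tensor,
                           visual_positions: torch.Tensor,
                           token_weights: torch.Tensor):
        """Down-weight cached key/value entries of visual tokens in place.

        Shapes follow a common decoder KV-cache layout (an assumption):
          key_cache, value_cache: [batch, num_heads, seq_len, head_dim]
          visual_positions:       [num_visual] indices into seq_len
          token_weights:          [num_visual] values in [0, 1]

        Scaling a token's value vectors shrinks its contribution to every
        subsequent attention output; scaling its keys additionally lowers
        the attention logits other tokens assign to it.
        """
        w = token_weights.view(1, 1, -1, 1)  # broadcast over batch, heads, head_dim
        key_cache[:, :, visual_positions, :] = key_cache[:, :, visual_positions, :] * w
        value_cache[:, :, visual_positions, :] = value_cache[:, :, visual_positions, :] * w
        return key_cache, value_cache

    if __name__ == "__main__":
        B, H, S, D = 1, 8, 32, 64
        k = torch.randn(B, H, S, D)
        v = torch.randn(B, H, S, D)
        vis = torch.arange(4, 20)            # pretend tokens 4..19 are image patches
        scores = torch.rand(vis.numel())     # placeholder per-token "shift" scores
        weights = torch.sigmoid(-4.0 * (scores - 0.5))  # toy mapping: high shift -> low weight
        reweight_visual_kv(k, v, vis, weights)

Because the reweighting touches only the cached keys and values, the rest of decoding proceeds unchanged, which is consistent with the abstract's claim that inference efficiency is preserved.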

@article{jiang2025_2505.17132,
  title={Robustifying Vision-Language Models via Dynamic Token Reweighting},
  author={Tanqiu Jiang and Jiacheng Liang and Rongyi Zhu and Jiawei Zhou and Fenglong Ma and Ting Wang},
  journal={arXiv preprint arXiv:2505.17132},
  year={2025}
}