Attacking Attention of Foundation Models Disrupts Downstream Tasks

Foundation models represent the most prominent and recent paradigm shift in artificial intelligence. These are large models, trained on broad data, that deliver high accuracy on many downstream tasks, often without fine-tuning. For this reason, models such as CLIP, DINO, or Vision Transformers (ViTs) are becoming the bedrock of many industrial AI-powered applications. However, the reliance on pre-trained foundation models also introduces significant security concerns, as these models are vulnerable to adversarial attacks. Such attacks involve deliberately crafted inputs designed to deceive AI systems, jeopardizing their reliability. This paper studies the vulnerabilities of vision foundation models, focusing specifically on CLIP and ViTs, and explores the transferability of adversarial attacks to downstream tasks. We introduce a novel attack that targets the structure of transformer-based architectures in a task-agnostic manner. We demonstrate the effectiveness of our attack on several downstream tasks: classification, captioning, image/text retrieval, segmentation, and depth estimation.
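The abstract does not spell out the attack's objective, but the general recipe it alludes to, perturbing an input so that a transformer's internal attention maps diverge from their clean values, independent of any task head, can be illustrated with a toy sketch. Everything below is an assumption for illustration: the single-head attention layer, the random `Wq`/`Wk` weights, and the PGD-style `pgd_attack` loop with a numerical gradient are stand-ins, not the paper's actual method or a real CLIP/ViT model.

```python
import numpy as np

# Hypothetical toy setup: one self-attention head with random weights,
# standing in for a pretrained ViT/CLIP block (NOT the paper's model).
rng = np.random.default_rng(0)
T, D = 4, 8                      # tokens, embedding dim
Wq = rng.normal(size=(D, D))
Wk = rng.normal(size=(D, D))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention_map(x):
    # Standard scaled dot-product attention weights.
    q, k = x @ Wq, x @ Wk
    return softmax(q @ k.T / np.sqrt(D))

def attn_divergence(x, a_clean):
    # Task-agnostic objective: push attention away from its clean value.
    return float(np.sum((attention_map(x) - a_clean) ** 2))

def num_grad(f, x, h=1e-4):
    # Central-difference gradient (a real attack would use autograd).
    g = np.zeros_like(x)
    for i in np.ndindex(*x.shape):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def pgd_attack(x, eps=0.1, alpha=0.02, steps=20):
    # PGD-style ascent on the attention-divergence objective,
    # keeping the perturbation inside an L-infinity ball of radius eps.
    a_clean = attention_map(x)
    delta = np.zeros_like(x)
    for _ in range(steps):
        g = num_grad(lambda xp: attn_divergence(xp, a_clean), x + delta)
        delta = np.clip(delta + alpha * np.sign(g), -eps, eps)
    return delta

x = rng.normal(size=(T, D))
delta = pgd_attack(x)
print("attention divergence after attack:",
      attn_divergence(x + delta, attention_map(x)))
```

Because the objective depends only on the attention maps and not on any task-specific head, the same perturbation can in principle degrade every downstream task that consumes the backbone's representations, which is the transferability property the abstract claims.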
@article{silva2025_2506.05394,
  title   = {Attacking Attention of Foundation Models Disrupts Downstream Tasks},
  author  = {Hondamunige Prasanna Silva and Federico Becattini and Lorenzo Seidenari},
  journal = {arXiv preprint arXiv:2506.05394},
  year    = {2025}
}