Poster: Adapting Pretrained Vision Transformers with LoRA Against Attack Vectors
- AAML

Main: 4 pages
Appendix: 4 pages
Bibliography: 3 pages
2 figures
4 tables
Abstract
Image classifiers, such as those used for autonomous vehicle navigation, are known to be susceptible to adversarial attacks on their input images. Adversarial perturbations have been studied extensively: they alter input images to induce malicious misclassifications without perceptible modification. This work proposes a countermeasure that adjusts the weights and output classes of pretrained vision transformers with a low-rank adaptation (LoRA), making them more robust against adversarial attacks while enabling scalable fine-tuning without full retraining.
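The abstract's core mechanism, low-rank adaptation, freezes the pretrained weight matrix W and trains only a small low-rank residual, so the adapted layer computes Wx + (alpha/r)·B(Ax). The sketch below is a minimal, hypothetical NumPy illustration of that update on a single projection layer (the dimensions, rank, and scaling are assumptions for illustration, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: d matches a typical ViT hidden width; r and alpha
# are illustrative LoRA hyperparameters, not the paper's settings.
d_in, d_out, r, alpha = 768, 768, 8, 16

# Frozen pretrained weight of one projection layer
W = rng.standard_normal((d_out, d_in)) * 0.02

# Trainable low-rank factors: B starts at zero, so at initialization the
# adapted layer reproduces the pretrained layer exactly.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def lora_forward(x, W, A, B, alpha, r):
    """y = W x + (alpha / r) * B (A x); only A and B would be trained."""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((4, d_in))
y = lora_forward(x, W, A, B, alpha, r)

# With B = 0 the adapter is a no-op: output equals the frozen layer's.
assert np.allclose(y, x @ W.T)

# LoRA trains 2*r*d parameters per layer instead of d*d.
trainable = A.size + B.size
full = W.size
print(f"trainable fraction: {trainable / full:.3%}")
```

With these illustrative numbers the adapter holds roughly 2% of the layer's parameters, which is why LoRA-style robustification can be swapped in or updated without retraining the full backbone.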
@article{blasingame2025_2506.00661,
  title={LoRA as a Flexible Framework for Securing Large Vision Systems},
  author={Zander W. Blasingame and Richard E. Neddo and Chen Liu},
  journal={arXiv preprint arXiv:2506.00661},
  year={2025}
}