A Framework to Assess Multilingual Vulnerabilities of LLMs

Large Language Models (LLMs) are acquiring a wider range of capabilities, including understanding and responding in multiple languages. While they undergo safety training to prevent them from answering illegal questions, imbalances in training data and human evaluation resources can make these models more susceptible to attacks in low-resource languages (LRL). This paper proposes a framework to automatically assess the multilingual vulnerabilities of commonly used LLMs. Using our framework, we evaluated six LLMs across eight languages representing varying levels of resource availability. We validated the assessments generated by our automated framework through human evaluation in two languages, demonstrating that the framework's results align with human judgments in most cases. Our findings reveal vulnerabilities in LRL; however, these may pose minimal risk as they often stem from the model's poor performance, resulting in incoherent responses.
View on arXiv@article{tang2025_2503.13081, title={ A Framework to Assess Multilingual Vulnerabilities of LLMs }, author={ Likai Tang and Niruth Bogahawatta and Yasod Ginige and Jiarui Xu and Shixuan Sun and Surangika Ranathunga and Suranga Seneviratne }, journal={arXiv preprint arXiv:2503.13081}, year={ 2025 } }