ResearchTrend.AI
  • Papers
  • Communities
  • Events
  • Blog
  • Pricing
Papers
Communities
Social Events
Terms and Conditions
Pricing
Parameter LabParameter LabTwitterGitHubLinkedInBlueskyYoutube

© 2025 ResearchTrend.AI, All rights reserved.

  1. Home
  2. Papers
  3. 2410.15236
66
7

Jailbreaking and Mitigation of Vulnerabilities in Large Language Models

20 October 2024
Benji Peng
Ziqian Bi
Qian Niu
Ming Liu
Pohsun Feng
Tianyang Wang
T. Wang
Lawrence K. Q. Yan
Yizhu Wen
Y. Zhang
Caitlyn Heqi Yin
    AAML
ArXivPDFHTML
Abstract

Large Language Models (LLMs) have transformed artificial intelligence by advancing natural language understanding and generation, enabling applications across fields beyond healthcare, software engineering, and conversational systems. Despite these advancements in the past few years, LLMs have shown considerable vulnerabilities, particularly to prompt injection and jailbreaking attacks. This review analyzes the state of research on these vulnerabilities and presents available defense strategies. We roughly categorize attack approaches into prompt-based, model-based, multimodal, and multilingual, covering techniques such as adversarial prompting, backdoor injections, and cross-modality exploits. We also review various defense mechanisms, including prompt filtering, transformation, alignment techniques, multi-agent defenses, and self-regulation, evaluating their strengths and shortcomings. We also discuss key metrics and benchmarks used to assess LLM safety and robustness, noting challenges like the quantification of attack success in interactive contexts and biases in existing datasets. Identifying current research gaps, we suggest future directions for resilient alignment strategies, advanced defenses against evolving attacks, automation of jailbreak detection, and consideration of ethical and societal impacts. This review emphasizes the need for continued research and cooperation within the AI community to enhance LLM security and ensure their safe deployment.

View on arXiv
@article{peng2025_2410.15236,
  title={ Jailbreaking and Mitigation of Vulnerabilities in Large Language Models },
  author={ Benji Peng and Keyu Chen and Qian Niu and Ziqian Bi and Ming Liu and Pohsun Feng and Tianyang Wang and Lawrence K.Q. Yan and Yizhu Wen and Yichao Zhang and Caitlyn Heqi Yin },
  journal={arXiv preprint arXiv:2410.15236},
  year={ 2025 }
}
Comments on this paper