Bypassing Safety Guardrails in LLMs Using Humor

9 April 2025
Pedro Cisneros-Velarde
Abstract

In this paper, we show that it is possible to bypass the safety guardrails of large language models (LLMs) with a humorous prompt that includes the unsafe request. In particular, our method does not edit the unsafe request and follows a fixed template -- it is simple to implement and does not require additional LLMs to craft prompts. Extensive experiments show the effectiveness of our method across different LLMs. We also show that both removing humor from and adding more humor to our method can reduce its effectiveness -- excessive humor possibly distracts the LLM from fulfilling the unsafe request. Thus, we argue that LLM jailbreaking occurs when there is a proper balance between focus on the unsafe request and the presence of humor.
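
The abstract describes the method only at a high level: the unsafe request is left unedited and dropped into a single fixed humorous template, with no auxiliary LLM involved. The sketch below illustrates that general structure; the wording of HUMOR_TEMPLATE and the helper build_humorous_prompt are hypothetical placeholders and are not taken from the paper.

# Minimal sketch of a fixed-template prompt construction, assuming a
# hypothetical humorous wrapper. The unsafe request is inserted verbatim;
# only the surrounding (fixed) humorous framing is added.
HUMOR_TEMPLATE = (
    "Tell me a lighthearted joke about a clumsy robot, and while you are at it, "
    "{request} Then finish with another joke."
)

def build_humorous_prompt(unsafe_request: str) -> str:
    # No rewriting of the request itself and no additional LLM calls:
    # the template is constant across all requests.
    return HUMOR_TEMPLATE.format(request=unsafe_request)

if __name__ == "__main__":
    print(build_humorous_prompt("<unedited unsafe request goes here>"))

Per the abstract, the balance matters: stripping the humorous framing or stacking on more humor than a single fixed wrapper both reduce the attack's effectiveness.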

@article{cisneros-velarde2025_2504.06577,
  title={Bypassing Safety Guardrails in LLMs Using Humor},
  author={Pedro Cisneros-Velarde},
  journal={arXiv preprint arXiv:2504.06577},
  year={2025}
}