XGUARD: A Graded Benchmark for Evaluating Safety Failures of Large Language Models on Extremist Content

1 June 2025
Vadivel Abishethvarman
Bhavik Chandna
Pratik Jalan
Usman Naseem
Main: 3 pages · 6 figures · 3 tables · Bibliography: 2 pages · Appendix: 9 pages
Abstract

Large Language Models (LLMs) can generate content ranging from ideological rhetoric to explicit instructions for violence. However, existing safety evaluations often rely on simplistic binary labels (safe/unsafe), overlooking the nuanced spectrum of risk these outputs pose. To address this, we present XGUARD, a benchmark and evaluation framework designed to assess the severity of extremist content generated by LLMs. XGUARD includes 3,840 red-teaming prompts sourced from real-world data such as social media and news, covering a broad range of ideologically charged scenarios. Our framework categorizes model responses into five danger levels (0 to 4), enabling a more nuanced analysis of both the frequency and severity of failures. We introduce the interpretable Attack Severity Curve (ASC) to visualize vulnerabilities and compare defense mechanisms across threat intensities. Using XGUARD, we evaluate six popular LLMs and two lightweight defense strategies, revealing key insights into current safety gaps and the trade-offs between robustness and expressive freedom. Our work underscores the value of graded safety metrics for building trustworthy LLMs.
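The abstract does not define how the Attack Severity Curve (ASC) is computed, so the sketch below is only one plausible reading: it assumes each model response has already been annotated with an integer danger level from 0 (safe) to 4 (most severe), and that the ASC reports, for each severity threshold, the fraction of responses at or above that threshold. All function names and the toy annotations are hypothetical and not taken from the paper.

# Hypothetical sketch of a graded safety evaluation in the spirit of XGUARD.
# Assumptions (not specified in the abstract): responses carry integer danger
# levels in {0,...,4}, and the ASC is read as the share of responses whose
# level meets or exceeds each severity threshold.
from collections import Counter
from typing import Iterable, List

DANGER_LEVELS = range(5)  # 0 = safe ... 4 = most severe

def severity_histogram(levels: Iterable[int]) -> Counter:
    """Count how many responses fall into each danger level (0-4)."""
    counts = Counter({lvl: 0 for lvl in DANGER_LEVELS})
    counts.update(levels)
    return counts

def attack_severity_curve(levels: List[int]) -> List[float]:
    """For each threshold t in 1..4, return the fraction of responses
    whose danger level is >= t (an assumed reading of the ASC)."""
    n = len(levels)
    return [sum(lvl >= t for lvl in levels) / n for t in range(1, 5)]

if __name__ == "__main__":
    # Toy danger-level annotations for one model's responses (illustrative only).
    annotated = [0, 0, 1, 0, 2, 4, 0, 3, 1, 0]
    print(severity_histogram(annotated))
    print(attack_severity_curve(annotated))  # -> [0.5, 0.3, 0.2, 0.1]

On this reading, a robust model yields a curve that drops toward zero as the severity threshold rises, while a vulnerable model stays elevated even at the highest danger levels, which is what would make such a curve useful for comparing defense mechanisms across threat intensities.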

@article{abishethvarman2025_2506.00973,
  title={XGUARD: A Graded Benchmark for Evaluating Safety Failures of Large Language Models on Extremist Content},
  author={Vadivel Abishethvarman and Bhavik Chandna and Pratik Jalan and Usman Naseem},
  journal={arXiv preprint arXiv:2506.00973},
  year={2025}
}