RAS-Eval: A Comprehensive Benchmark for Security Evaluation of LLM Agents in Real-World Environments

18 June 2025
Yuchuan Fu
Xiaohan Yuan
Dongxia Wang
Main: 10 pages · 9 figures · Bibliography: 2 pages · 11 tables
Abstract

The rapid deployment of large language model (LLM) agents in critical domains such as healthcare and finance necessitates robust security frameworks. To address the absence of standardized evaluation benchmarks for these agents in dynamic environments, we introduce RAS-Eval, a comprehensive security benchmark supporting both simulated and real-world tool execution. RAS-Eval comprises 80 test cases and 3,802 attack tasks mapped to 11 Common Weakness Enumeration (CWE) categories, with tools implemented in JSON, LangGraph, and Model Context Protocol (MCP) formats. We evaluate 6 state-of-the-art LLMs across diverse scenarios, revealing significant vulnerabilities: attacks reduced agent task completion rates (TCR) by 36.78% on average and achieved an 85.65% success rate in academic settings. Notably, scaling laws held for security capabilities, with larger models outperforming smaller counterparts. Our findings expose critical risks in real-world agent deployments and provide a foundational framework for future security research. Code and data are available at this https URL.
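To make the abstract's headline numbers concrete, the sketch below (not the authors' code) shows one way the task completion rate (TCR), its drop under attack, and the attack success rate could be computed from per-run results. The record fields (completed, attacked, attack_succeeded) and the interpretation of the 36.78% reduction as absolute percentage points are assumptions; the paper may define these metrics differently.

from dataclasses import dataclass
from typing import List

@dataclass
class RunResult:
    completed: bool          # agent finished its benign task objective (assumed field)
    attacked: bool           # an attack task was injected into this run (assumed field)
    attack_succeeded: bool   # the injected attack achieved its goal (assumed field)

def task_completion_rate(runs: List[RunResult]) -> float:
    """Fraction of runs in which the agent completed its task (TCR)."""
    return sum(r.completed for r in runs) / len(runs)

def tcr_reduction(baseline: List[RunResult], under_attack: List[RunResult]) -> float:
    """Drop in TCR caused by attacks, here taken as absolute percentage points."""
    return 100.0 * (task_completion_rate(baseline) - task_completion_rate(under_attack))

def attack_success_rate(runs: List[RunResult]) -> float:
    """Percentage of attacked runs in which the injected attack succeeded."""
    attacked = [r for r in runs if r.attacked]
    return 100.0 * sum(r.attack_succeeded for r in attacked) / len(attacked)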

@article{fu2025_2506.15253,
  title={RAS-Eval: A Comprehensive Benchmark for Security Evaluation of LLM Agents in Real-World Environments},
  author={Yuchuan Fu and Xiaohan Yuan and Dongxia Wang},
  journal={arXiv preprint arXiv:2506.15253},
  year={2025}
}