
ACSE-Eval: Can LLMs threat model real-world cloud infrastructure?

Abstract

While Large Language Models have shown promise in cybersecurity applications, their effectiveness in identifying security threats within cloud deployments remains unexplored. This paper introduces the AWS Cloud Security Engineering Eval (ACSE-Eval), a novel dataset for evaluating LLMs' cloud security threat modeling capabilities. ACSE-Eval contains 100 production-grade AWS deployment scenarios, each featuring detailed architectural specifications, Infrastructure as Code implementations, documented security vulnerabilities, and associated threat modeling parameters. Our dataset enables systematic assessment of LLMs' abilities to identify security risks, analyze attack vectors, and propose mitigation strategies in cloud environments. Our evaluations on ACSE-Eval demonstrate that GPT 4.1 and Gemini 2.5 Pro excel at threat identification, with Gemini 2.5 Pro performing best in zero-shot scenarios and GPT 4.1 showing superior results in few-shot settings. While GPT 4.1 maintains a slight overall performance advantage, Claude 3.7 Sonnet generates the most semantically sophisticated threat models but struggles with threat categorization and generalization. To promote reproducibility and advance research in automated cybersecurity threat analysis, we open-source our dataset, evaluation metrics, and methodologies.
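To make the evaluation setup concrete, the sketch below shows what one ACSE-Eval scenario record and a zero-shot threat-modeling query might look like. This is a minimal illustration only: the field names (`architecture`, `iac_template`, `known_threats`) and the `query_llm` helper are assumptions for exposition, not the dataset's actual schema or the authors' evaluation harness.

```python
# Hypothetical sketch of an ACSE-Eval-style scenario record and a zero-shot
# evaluation query. Field names and query_llm are illustrative assumptions,
# not the released dataset's schema or the paper's harness.
from dataclasses import dataclass, field


@dataclass
class Scenario:
    """One AWS deployment scenario, mirroring the components the abstract lists."""
    name: str
    architecture: str        # prose architectural specification
    iac_template: str        # Infrastructure as Code (e.g., CloudFormation/CDK)
    known_threats: list[str] = field(default_factory=list)  # documented vulnerabilities


def query_llm(prompt: str) -> str:
    """Placeholder for a call to the model under evaluation (GPT 4.1, Gemini 2.5 Pro, etc.)."""
    raise NotImplementedError("wire this to your model provider's API")


def zero_shot_threat_model(scenario: Scenario) -> str:
    """Zero-shot setting: the model sees only the scenario, with no worked examples."""
    prompt = (
        "You are a cloud security engineer. Given the AWS architecture and "
        "IaC below, enumerate security threats, likely attack vectors, and "
        "proposed mitigations.\n\n"
        f"Architecture:\n{scenario.architecture}\n\n"
        f"Infrastructure as Code:\n{scenario.iac_template}\n"
    )
    return query_llm(prompt)
```

A few-shot variant would prepend worked scenario/threat-model pairs to the same prompt; the model's output would then be scored against `known_threats` with the paper's released metrics.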

@article{munshi2025_2505.11565,
  title={ACSE-Eval: Can LLMs threat model real-world cloud infrastructure?},
  author={Sarthak Munshi and Swapnil Pathak and Sonam Ghatode and Thenuga Priyadarshini and Dhivya Chandramouleeswaran and Ashutosh Rana},
  journal={arXiv preprint arXiv:2505.11565},
  year={2025}
}