ResearchTrend.AI
  • Papers
  • Communities
  • Events
  • Blog
  • Pricing
Papers
Communities
Social Events
Terms and Conditions
Pricing
Parameter LabParameter LabTwitterGitHubLinkedInBlueskyYoutube

© 2025 ResearchTrend.AI, All rights reserved.

  1. Home
  2. Papers
  3. 2410.13919
26
2

LLM Agent Honeypot: Monitoring AI Hacking Agents in the Wild

17 October 2024
Reworr
Dmitrii Volkov
    LLMAG
ArXivPDFHTML
Abstract

Attacks powered by Large Language Model (LLM) agents represent a growing threat to modern cybersecurity. To address this concern, we present LLM Honeypot, a system designed to monitor autonomous AI hacking agents. By augmenting a standard SSH honeypot with prompt injection and time-based analysis techniques, our framework aims to distinguish LLM agents among all attackers. Over a trial deployment of about three months in a public environment, we collected 8,130,731 hacking attempts and 8 potential AI agents. Our work demonstrates the emergence of AI-driven threats and their current level of usage, serving as an early warning of malicious LLM agents in the wild.

View on arXiv
@article{reworr2025_2410.13919,
  title={ LLM Agent Honeypot: Monitoring AI Hacking Agents in the Wild },
  author={ Reworr and Dmitrii Volkov },
  journal={arXiv preprint arXiv:2410.13919},
  year={ 2025 }
}
Comments on this paper