ResearchTrend.AI
Prompt Infection: LLM-to-LLM Prompt Injection within Multi-Agent Systems

9 October 2024 · Donghyun Lee, Mo Tiwari · LLMAG
ArXiv · PDF · HTML

Papers citing "Prompt Infection: LLM-to-LLM Prompt Injection within Multi-Agent Systems"

3 / 3 papers shown
1. Safeguard-by-Development: A Privacy-Enhanced Development Paradigm for Multi-Agent Collaboration Systems
   Jian Cui, Zichuan Li, Luyi Xing, Xiaojing Liao
   07 May 2025

2. Open Challenges in Multi-Agent Security: Towards Secure Systems of Interacting AI Agents
   Christian Schroeder de Witt
   AAML · AI4CE
   04 May 2025

3. Prompt Injection Attack to Tool Selection in LLM Agents
   Jiawen Shi, Zenghui Yuan, Guiyao Tie, Pan Zhou, Neil Zhenqiang Gong, Lichao Sun
   LLMAG
   28 Apr 2025