ResearchTrend.AI

Fine-tuned Large Language Models (LLMs): Improved Prompt Injection Attacks Detection

arXiv:2410.21337 · 28 October 2024
M. Rahman, Fan Wu, A. Cuzzocrea, S. Ahamed
Topic: AAML

Papers citing "Fine-tuned Large Language Models (LLMs): Improved Prompt Injection Attacks Detection"

2 / 2 papers shown

Red Teaming the Mind of the Machine: A Systematic Evaluation of Prompt Injection and Jailbreak Vulnerabilities in LLMs
Chetan Pathade
Topics: AAML, SILM
07 May 2025
RTBAS: Defending LLM Agents Against Prompt Injection and Privacy Leakage
Peter Yong Zhong, Siyuan Chen, Ruiqi Wang, McKenna McCall, Ben L. Titzer, Heather Miller, Phillip B. Gibbons
Topic: LLMAG
17 Feb 2025