Prompt Injection attack against LLM-integrated Applications
arXiv:2306.05499
8 June 2023
Yi Liu
Gelei Deng
Yuekang Li
Kailong Wang
Zihao Wang
Xiaofeng Wang
Tianwei Zhang
Yepang Liu
Haoyu Wang
Yanhong Zheng
Yang Liu
    SILM

Papers citing "Prompt Injection attack against LLM-integrated Applications"

50 / 223 papers shown
A Survey on Failure Analysis and Fault Injection in AI Systems
Guangba Yu
Gou Tan
Haojia Huang
Zhenyu Zhang
Pengfei Chen
Roberto Natella
Zibin Zheng
62
4
0
28 Jun 2024
A Survey on Privacy Attacks Against Digital Twin Systems in AI-Robotics
Ivan A. Fernandez
Subash Neupane
Trisha Chakraborty
Shaswata Mitra
Sudip Mittal
Nisha Pillai
Jingdao Chen
Shahram Rahimi
54
1
0
27 Jun 2024
Psychological Profiling in Cybersecurity: A Look at LLMs and Psycholinguistic Features
Jean Marie Tshimula
D'Jeff K. Nkashama
Jean Tshibangu Muabila
René Manassé Galekwa
Hugues Kanda
...
Belkacem Chikhaoui
Shengrui Wang
Ali Mulenda Sumbu
Xavier Ndona
Raoul Kienge-Kienge Intudi
60
0
0
26 Jun 2024
Do LLMs dream of elephants (when told not to)? Latent concept association and associative memory in transformers
Yibo Jiang
Goutham Rajendran
Pradeep Ravikumar
Bryon Aragam
CLL
KELM
47
6
0
26 Jun 2024
Adversarial Search Engine Optimization for Large Language Models
Fredrik Nestaas
Edoardo Debenedetti
Florian Tramèr
AAML
44
5
0
26 Jun 2024
AgentDojo: A Dynamic Environment to Evaluate Attacks and Defenses for LLM Agents
Edoardo Debenedetti
Jie Zhang
Mislav Balunović
Luca Beurer-Kellner
Marc Fischer
Florian Tramèr
LLMAG
AAML
64
29
1
19 Jun 2024
Adversarial Attacks on Large Language Models in Medicine
Yifan Yang
Qiao Jin
Furong Huang
Zhiyong Lu
AAML
49
4
0
18 Jun 2024
Self and Cross-Model Distillation for LLMs: Effective Methods for Refusal Pattern Alignment
Jie Li
Yi Liu
Chongyang Liu
Xiaoning Ren
Ling Shi
Weisong Sun
Yinxing Xue
37
0
0
17 Jun 2024
Threat Modelling and Risk Analysis for Large Language Model (LLM)-Powered Applications
Stephen Burabari Tete
47
7
0
16 Jun 2024
TorchOpera: A Compound AI System for LLM Safety
Shanshan Han
Yuhang Yao
Zijian Hu
Dimitris Stripelis
Zhaozhuo Xu
Chaoyang He
LLMAG
49
0
0
16 Jun 2024
Large Language Models as Software Components: A Taxonomy for LLM-Integrated Applications
Irene Weber
53
7
0
13 Jun 2024
Unique Security and Privacy Threats of Large Language Model: A Comprehensive Survey
Shang Wang
Tianqing Zhu
Bo Liu
Ming Ding
Xu Guo
Dayong Ye
Wanlei Zhou
Philip S. Yu
PILM
74
17
0
12 Jun 2024
Dataset and Lessons Learned from the 2024 SaTML LLM Capture-the-Flag Competition
Edoardo Debenedetti
Javier Rando
Daniel Paleka
Silaghi Fineas Florin
Dragos Albastroiu
...
Stefan Kraft
Mario Fritz
Florian Tramèr
Sahar Abdelnabi
Lea Schönherr
64
10
0
12 Jun 2024
Raccoon: Prompt Extraction Benchmark of LLM-Integrated Applications
Junlin Wang
Tianyi Yang
Roy Xie
Bhuwan Dhingra
SILM
AAML
41
4
0
10 Jun 2024
SelfDefend: LLMs Can Defend Themselves against Jailbreaking in a Practical Manner
Xunguang Wang
Daoyuan Wu
Zhenlan Ji
Zongjie Li
Pingchuan Ma
Shuai Wang
Yingjiu Li
Yang Liu
Ning Liu
Juergen Rahmel
AAML
79
8
0
08 Jun 2024
More Victories, Less Cooperation: Assessing Cicero's Diplomacy Play
Wichayaporn Wongkamjan
Feng Gu
Yanze Wang
Ulf Hermjakob
Jonathan May
Brandon M. Stewart
Jonathan K. Kummerfeld
Denis Peskoff
Jordan L. Boyd-Graber
55
3
0
07 Jun 2024
A Survey of Language-Based Communication in Robotics
William Hunt
Sarvapali D. Ramchurn
Mohammad D. Soorati
LM&Ro
70
12
0
06 Jun 2024
Measure-Observe-Remeasure: An Interactive Paradigm for Differentially-Private Exploratory Analysis
Priyanka Nanayakkara
Hyeok Kim
Yifan Wu
Ali Sarvghad
Narges Mahyar
G. Miklau
Jessica Hullman
33
17
0
04 Jun 2024
AI Agents Under Threat: A Survey of Key Security Challenges and Future Pathways
Zehang Deng
Yongjian Guo
Changzhou Han
Wanlun Ma
Junwu Xiong
Sheng Wen
Yang Xiang
61
26
0
04 Jun 2024
Safeguarding Large Language Models: A Survey
Yi Dong
Ronghui Mu
Yanghao Zhang
Siqi Sun
Tianle Zhang
...
Yi Qi
Jinwei Hu
Jie Meng
Saddek Bensalem
Xiaowei Huang
OffRL
KELM
AILaw
52
19
0
03 Jun 2024
PrivacyRestore: Privacy-Preserving Inference in Large Language Models via Privacy Removal and Restoration
Ziqian Zeng
Jianwei Wang
Zhengdong Lu
Huiping Zhuang
Cen Chen
RALM
KELM
58
7
0
03 Jun 2024
Privacy in LLM-based Recommendation: Recent Advances and Future Directions
Sichun Luo
Wei Shao
Yuxuan Yao
Jian Xu
Mingyang Liu
...
Maolin Wang
Guanzhi Deng
Hanxu Hou
Xinyi Zhang
Linqi Song
36
1
0
03 Jun 2024
BadRAG: Identifying Vulnerabilities in Retrieval Augmented Generation of Large Language Models
Jiaqi Xue
Meng Zheng
Yebowen Hu
Fei Liu
Xun Chen
Qian Lou
AAML
SILM
38
27
0
03 Jun 2024
Exfiltration of personal information from ChatGPT via prompt injection
Gregory Schwartzman
SILM
31
1
0
31 May 2024
Enhancing Jailbreak Attack Against Large Language Models through Silent Tokens
Jiahao Yu
Haozheng Luo
Jerry Yao-Chieh Hu
Wenbo Guo
Han Liu
Xinyu Xing
45
19
0
31 May 2024
AI Risk Management Should Incorporate Both Safety and Security
Xiangyu Qi
Yangsibo Huang
Yi Zeng
Edoardo Debenedetti
Jonas Geiping
...
Chaowei Xiao
Yue Liu
Dawn Song
Peter Henderson
Prateek Mittal
AAML
56
11
0
29 May 2024
Semantic-guided Prompt Organization for Universal Goal Hijacking against LLMs
Yihao Huang
Chong Wang
Xiaojun Jia
Qing Guo
Felix Juefei-Xu
Jian Zhang
G. Pu
Yang Liu
41
9
0
23 May 2024
Lockpicking LLMs: A Logit-Based Jailbreak Using Token-level Manipulation
Yuxi Li
Yi Liu
Yuekang Li
Ling Shi
Gelei Deng
Shengquan Chen
Kailong Wang
53
12
0
20 May 2024
Sociotechnical Implications of Generative Artificial Intelligence for Information Access
Bhaskar Mitra
Henriette Cramer
Olya Gurevich
55
2
0
19 May 2024
Safeguarding Vision-Language Models Against Patched Visual Prompt Injectors
Jiachen Sun
Changsheng Wang
Jiong Wang
Yiwei Zhang
Chaowei Xiao
AAML
VLM
44
3
0
17 May 2024
What is it for a Machine Learning Model to Have a Capability?
Jacqueline Harding
Nathaniel Sharadin
ELM
40
3
0
14 May 2024
PARDEN, Can You Repeat That? Defending against Jailbreaks via Repetition
Ziyang Zhang
Qizhen Zhang
Jakob N. Foerster
AAML
43
18
0
13 May 2024
Risks of Practicing Large Language Models in Smart Grid: Threat Modeling and Validation
Jiangnan Li
Yingyuan Yang
Jinyuan Stella Sun
62
4
0
10 May 2024
Large Language Models for Cyber Security: A Systematic Literature Review
HanXiang Xu
Shenao Wang
Ningke Li
Kaidi Wang
Yanjie Zhao
Kai Chen
Ting Yu
Yang Liu
Haoyu Wang
44
29
0
08 May 2024
Human-Imperceptible Retrieval Poisoning Attacks in LLM-Powered Applications
Quan Zhang
Binqi Zeng
Chijin Zhou
Gwihwan Go
Heyuan Shi
Yu Jiang
SILM
AAML
42
21
0
26 Apr 2024
Attacks on Third-Party APIs of Large Language Models
Wanru Zhao
Vidit Khazanchi
Haodi Xing
Xuanli He
Qiongkai Xu
Nicholas D. Lane
31
6
0
24 Apr 2024
The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions
Eric Wallace
Kai Y. Xiao
R. Leike
Lilian Weng
Johannes Heidecke
Alex Beutel
SILM
58
121
0
19 Apr 2024
LLMs for Cyber Security: New Opportunities
D. Divakaran
Sai Teja Peddinti
31
11
0
17 Apr 2024
Glitch Tokens in Large Language Models: Categorization Taxonomy and Effective Detection
Yuxi Li
Yi Liu
Gelei Deng
Ying Zhang
Wenjia Song
Ling Shi
Kailong Wang
Yuekang Li
Yang Liu
Haoyu Wang
52
21
0
15 Apr 2024
GoEX: Perspectives and Designs Towards a Runtime for Autonomous LLM Applications
Shishir G. Patil
Tianjun Zhang
Vivian Fang
Noppapon C.
Roy Huang
Aaron Hao
Martin Casado
Joseph E. Gonzalez
Raluca Ada Popa
Ion Stoica
ALM
34
10
0
10 Apr 2024
CodecLM: Aligning Language Models with Tailored Synthetic Data
Zifeng Wang
Chun-Liang Li
Vincent Perot
Long T. Le
Jin Miao
Zizhao Zhang
Chen-Yu Lee
Tomas Pfister
SyDa
ALM
33
18
0
08 Apr 2024
Goal-guided Generative Prompt Injection Attack on Large Language Models
Chong Zhang
Mingyu Jin
Qinkai Yu
Chengzhi Liu
Haochen Xue
Xiaobo Jin
AAML
SILM
52
12
0
06 Apr 2024
Octopus v2: On-device language model for super agent
Wei Chen
Zhiyuan Li
RALM
40
24
0
02 Apr 2024
Can LLMs get help from other LLMs without revealing private information?
Florian Hartmann
D. Tran
Peter Kairouz
Victor Carbune
Blaise Agüera y Arcas
30
6
0
01 Apr 2024
Exploring the Privacy Protection Capabilities of Chinese Large Language Models
Yuqi Yang
Xiaowen Huang
Jitao Sang
ELM
PILM
AILaw
55
1
0
27 Mar 2024
BadEdit: Backdooring large language models by model editing
Yanzhou Li
Tianlin Li
Kangjie Chen
Jian Zhang
Shangqing Liu
Wenhan Wang
Tianwei Zhang
Yang Liu
SyDa
AAML
KELM
59
53
0
20 Mar 2024
Securing Large Language Models: Threats, Vulnerabilities and Responsible Practices
Sara Abdali
Richard Anarfi
C. Barberan
Jia He
PILM
73
25
0
19 Mar 2024
Large language models in 6G security: challenges and opportunities
Tri Nguyen
Huong Nguyen
Ahmad Ijaz
Saeid Sheikhi
Athanasios V. Vasilakos
Panos Kostakos
ELM
33
8
0
18 Mar 2024
Logits of API-Protected LLMs Leak Proprietary Information
Matthew Finlayson
Xiang Ren
Swabha Swayamdipta
PILM
39
23
0
14 Mar 2024
Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization
Renjie Pi
Tianyang Han
Wei Xiong
Jipeng Zhang
Runtao Liu
Rui Pan
Tong Zhang
MLLM
55
34
0
13 Mar 2024