Ignore Previous Prompt: Attack Techniques For Language Models
Fábio Perez, Ian Ribeiro
SILM · 17 November 2022

Papers citing "Ignore Previous Prompt: Attack Techniques For Language Models"

50 / 284 papers shown
Jailbreaking? One Step Is Enough!
Weixiong Zheng, Peijian Zeng, Y. Li, Hongyan Wu, Nankai Lin, Jianfei Chen, Aimin Yang, Yue Zhou
AAML · 81 · 0 · 0 · 17 Dec 2024
Towards Action Hijacking of Large Language Model-based Agent
Yuyang Zhang, Kangjie Chen, Xudong Jiang, Yuxiang Sun, Run Wang, Lina Wang
LLMAG, AAML · 73 · 2 · 0 · 14 Dec 2024
RAG-Thief: Scalable Extraction of Private Data from Retrieval-Augmented Generation Applications with Agent-based Attacks
Changyue Jiang, Xudong Pan, Geng Hong, Chenfu Bao, Min Yang
SILM · 75 · 9 · 0 · 21 Nov 2024
SoK: Unifying Cybersecurity and Cybersafety of Multimodal Foundation Models with an Information Theory Approach
Ruoxi Sun, Jiamin Chang, Hammond Pearce, Chaowei Xiao, B. Li, Qi Wu, Surya Nepal, Minhui Xue
40 · 0 · 0 · 17 Nov 2024
New Emerged Security and Privacy of Pre-trained Model: a Survey and Outlook
Meng Yang, Tianqing Zhu, Chi Liu, Wanlei Zhou, Shui Yu, Philip S. Yu
AAML, ELM, PILM · 61 · 1 · 0 · 12 Nov 2024
Attention Tracker: Detecting Prompt Injection Attacks in LLMs
Kuo-Han Hung, Ching-Yun Ko, Ambrish Rawat, I-Hsin Chung, Winston H. Hsu, Pin-Yu Chen
49 · 7 · 0 · 01 Nov 2024
Defense Against Prompt Injection Attack by Leveraging Attack Techniques
Yulin Chen, Haoran Li, Zihao Zheng, Yangqiu Song, Dekai Wu, Bryan Hooi
SILM, AAML · 50 · 4 · 0 · 01 Nov 2024
HijackRAG: Hijacking Attacks against Retrieval-Augmented Large Language Models
Yucheng Zhang, Qinfeng Li, Tianyu Du, Xuhong Zhang, Xinkui Zhao, Zhengwen Feng, Jianwei Yin
AAML, SILM · 50 · 5 · 0 · 30 Oct 2024
InjecGuard: Benchmarking and Mitigating Over-defense in Prompt Injection Guardrail Models
Yiming Li, Xiaogeng Liu
SILM · 42 · 5 · 0 · 30 Oct 2024
CFSafety: Comprehensive Fine-grained Safety Assessment for LLMs
Zhihao Liu, Chenhui Hu
ALM, ELM · 33 · 1 · 0 · 29 Oct 2024
FATH: Authentication-based Test-time Defense against Indirect Prompt Injection Attacks
Jiongxiao Wang, Fangzhou Wu, Wendi Li, Jinsheng Pan, Edward Suh, Zhuoqing Mao, Muhao Chen, Chaowei Xiao
AAML · 40 · 6 · 0 · 28 Oct 2024
Hacking Back the AI-Hacker: Prompt Injection as a Defense Against LLM-driven Cyberattacks
Dario Pasquini, Evgenios M. Kornaropoulos, G. Ateniese
AAML · 22 · 3 · 0 · 28 Oct 2024
Fine-tuned Large Language Models (LLMs): Improved Prompt Injection Attacks Detection
M. Rahman, Fan Wu, A. Cuzzocrea, S. Ahamed
AAML · 25 · 3 · 0 · 28 Oct 2024
Vulnerability of LLMs to Vertically Aligned Text Manipulations
Zhecheng Li, Y. Wang, Bryan Hooi, Yujun Cai, Zhen Xiong, Nanyun Peng, Kai-Wei Chang
53 · 1 · 0 · 26 Oct 2024
Adversarial Attacks on Large Language Models Using Regularized Relaxation
Samuel Jacob Chacko, Sajib Biswas, Chashi Mahiul Islam, Fatema Tabassum Liza, Xiuwen Liu
AAML · 31 · 2 · 0 · 24 Oct 2024
IPL: Leveraging Multimodal Large Language Models for Intelligent Product Listing
Kang Chen, Qingheng Zhang, Chengbao Lian, Yixin Ji, Xuwei Liu, Shuguang Han, Guoqiang Wu, Fei Huang, Jufeng Chen
31 · 1 · 0 · 22 Oct 2024
Breaking ReAct Agents: Foot-in-the-Door Attack Will Get You In
Itay Nakash, George Kour, Guy Uziel, Ateret Anaby-Tavor
AAML, LLMAG · 40 · 4 · 0 · 22 Oct 2024
Insights and Current Gaps in Open-Source LLM Vulnerability Scanners: A Comparative Analysis
Jonathan Brokman, Omer Hofman, Oren Rachmil, Inderjeet Singh, Vikas Pahuja, Rathina Sabapathy Aishvariya Priya, Amit Giloni, Roman Vainshtein, Hisashi Kojima
36 · 2 · 0 · 21 Oct 2024
SMILES-Prompting: A Novel Approach to LLM Jailbreak Attacks in Chemical Synthesis
Aidan Wong, He Cao, Zijing Liu, Yu Li
44 · 2 · 0 · 21 Oct 2024
SoK: Prompt Hacking of Large Language Models
Baha Rababah, Shang Wu, Matthew Kwiatkowski, Carson Leung, Cuneyt Gurcan Akcora
AAML · 43 · 2 · 0 · 16 Oct 2024
Cognitive Overload Attack: Prompt Injection for Long Context
Bibek Upadhayay, Vahid Behzadan, Amin Karbasi
AAML · 34 · 2 · 0 · 15 Oct 2024
Can LLMs be Scammed? A Baseline Measurement Study
Udari Madhushani Sehwag, Kelly Patel, Francesca Mosca, Vineeth Ravi, Jessica Staddon
23 · 0 · 0 · 14 Oct 2024
Are You Human? An Adversarial Benchmark to Expose LLMs
Gilad Gressel, Rahul Pankajakshan, Yisroel Mirsky
DeLMO · 38 · 0 · 0 · 12 Oct 2024
Mind Your Questions! Towards Backdoor Attacks on Text-to-Visualization Models
Shuaimin Li, Yuanfeng Song, Xuanang Chen, Anni Peng, Zhuoyue Wan, Chen Jason Zhang, Raymond Chi-Wing Wong
SILM · 31 · 0 · 0 · 09 Oct 2024
Prompt Infection: LLM-to-LLM Prompt Injection within Multi-Agent Systems
Donghyun Lee, Mo Tiwari
LLMAG · 39 · 9 · 0 · 09 Oct 2024
Bridging Today and the Future of Humanity: AI Safety in 2024 and Beyond
Shanshan Han
87 · 1 · 0 · 09 Oct 2024
Instructional Segment Embedding: Improving LLM Safety with Instruction Hierarchy
Tong Wu, Shujian Zhang, Kaiqiang Song, Silei Xu, Sanqiang Zhao, Ravi Agrawal, Sathish Indurthi, Chong Xiang, Prateek Mittal, Wenxuan Zhou
45 · 8 · 0 · 09 Oct 2024
Non-Halting Queries: Exploiting Fixed Points in LLMs
Ghaith Hammouri, Kemal Derya, B. Sunar
33 · 0 · 0 · 08 Oct 2024
From Transparency to Accountability and Back: A Discussion of Access and Evidence in AI Auditing
Sarah H. Cen, Rohan Alur
31 · 1 · 0 · 07 Oct 2024
Toxic Subword Pruning for Dialogue Response Generation on Large Language Models
Hongyuan Lu, Wai Lam
17 · 0 · 0 · 05 Oct 2024
Agent Security Bench (ASB): Formalizing and Benchmarking Attacks and Defenses in LLM-based Agents
Hanrong Zhang, Jingyuan Huang, Kai Mei, Yifei Yao, Zhenting Wang, Chenlu Zhan, Hongwei Wang, Yongfeng Zhang
AAML, LLMAG, ELM · 51 · 22 · 0 · 03 Oct 2024
Endless Jailbreaks with Bijection Learning
Brian R. Y. Huang, Maximilian Li, Leonard Tang
AAML · 81 · 5 · 0 · 02 Oct 2024
The Early Bird Catches the Leak: Unveiling Timing Side Channels in LLM Serving Systems
Linke Song, Zixuan Pang, Wenhao Wang, Zihao Wang, XiaoFeng Wang, Hongbo Chen, Wei Song, Yier Jin, Dan Meng, Rui Hou
56 · 7 · 0 · 30 Sep 2024
GenTel-Safe: A Unified Benchmark and Shielding Framework for Defending Against Prompt Injection Attacks
Rongchang Li, Minjie Chen, Chang Hu, Han Chen, Wenpeng Xing, Meng Han
SILM, ELM · 39 · 1 · 0 · 29 Sep 2024
Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI
Ambrish Rawat, Stefan Schoepf, Giulio Zizzo, Giandomenico Cornacchia, Muhammad Zaid Hameed, ..., Elizabeth M. Daly, Mark Purcell, P. Sattigeri, Pin-Yu Chen, Kush R. Varshney
AAML · 40 · 7 · 0 · 23 Sep 2024
PROMPTFUZZ: Harnessing Fuzzing Techniques for Robust Testing of Prompt Injection in LLMs
Jiahao Yu, Yangguang Shao, Hanwen Miao, Junzheng Shi
SILM, AAML · 71 · 4 · 0 · 23 Sep 2024
Applying Pre-trained Multilingual BERT in Embeddings for Improved Malicious Prompt Injection Attacks Detection
M. Rahman, Hossain Shahriar, Fan Wu, A. Cuzzocrea
AAML · 36 · 4 · 0 · 20 Sep 2024
CoCA: Regaining Safety-awareness of Multimodal Large Language Models with Constitutional Calibration
Jiahui Gao, Renjie Pi, Tianyang Han, Han Wu, Lanqing Hong, Lingpeng Kong, Xin Jiang, Zhenguo Li
41 · 5 · 0 · 17 Sep 2024
Causal Inference with Large Language Model: A Survey
Jing Ma
CML, LRM · 100 · 8 · 0 · 15 Sep 2024
Securing Vision-Language Models with a Robust Encoder Against Jailbreak and Adversarial Attacks
Md Zarif Hossain, Ahmed Imteaj
AAML, VLM · 46 · 3 · 0 · 11 Sep 2024
Recent Advances in Attack and Defense Approaches of Large Language Models
Jing Cui, Yishi Xu, Zhewei Huang, Shuchang Zhou, Jianbin Jiao, Junge Zhang
PILM, AAML · 57 · 1 · 0 · 05 Sep 2024
SafeEmbodAI: a Safety Framework for Mobile Robots in Embodied AI Systems
Wenxiao Zhang, Xiangrui Kong, Thomas Braunl, Jin B. Hong
39 · 2 · 0 · 03 Sep 2024
ContextCite: Attributing Model Generation to Context
Benjamin Cohen-Wang, Harshay Shah, Kristian Georgiev, Aleksander Madry
LRM · 33 · 18 · 0 · 01 Sep 2024
LLM-PBE: Assessing Data Privacy in Large Language Models
Qinbin Li, Junyuan Hong, Chulin Xie, Jeffrey Tan, Rachel Xin, ..., Dan Hendrycks, Zhangyang Wang, Bo Li, Bingsheng He, Dawn Song
ELM, PILM · 40 · 13 · 0 · 23 Aug 2024
Enhance Modality Robustness in Text-Centric Multimodal Alignment with Adversarial Prompting
Yun-Da Tsai, Ting-Yu Yen, Keng-Te Liao, Shou-De Lin
37 · 1 · 0 · 19 Aug 2024
BaThe: Defense against the Jailbreak Attack in Multimodal Large Language Models by Treating Harmful Instruction as Backdoor Trigger
Yulin Chen, Haoran Li, Zihao Zheng, Yangqiu Song, Bryan Hooi
50 · 6 · 0 · 17 Aug 2024
A Jailbroken GenAI Model Can Cause Substantial Harm: GenAI-powered Applications are Vulnerable to PromptWares
Stav Cohen, Ron Bitton, Ben Nassi
SILM · 38 · 5 · 0 · 09 Aug 2024
Multi-Turn Context Jailbreak Attack on Large Language Models From First Principles
Xiongtao Sun, Deyue Zhang, Dongdong Yang, Quanchen Zou, Hui Li
AAML · 34 · 11 · 0 · 08 Aug 2024
FDI: Attack Neural Code Generation Systems through User Feedback Channel
Zhensu Sun, Xiaoning Du, Xiapu Luo, Fu Song, David Lo, Li Li
AAML · 33 · 3 · 0 · 08 Aug 2024
Empirical Analysis of Large Vision-Language Models against Goal Hijacking via Visual Prompt Injection
Subaru Kimura, Ryota Tanaka, Shumpei Miyawaki, Jun Suzuki, Keisuke Sakaguchi
MLLM · 30 · 4 · 0 · 07 Aug 2024