From Prompt Injections to SQL Injection Attacks: How Protected is Your LLM-Integrated Web Application?

3 August 2023
Rodrigo Pedro, Daniel Castro, Paulo Carreira, Nuno Santos
Tags: SILM, AAML

Papers citing "From Prompt Injections to SQL Injection Attacks: How Protected is Your LLM-Integrated Web Application?"

33 / 33 papers shown
Web IP at Risk: Prevent Unauthorized Real-Time Retrieval by Large Language Models
Yisheng Zhong, Yizhu Wen, Junfeng Guo, Mehran Kafai, Heng Huang, Hanqing Guo, Zhuangdi Zhu
19 May 2025

LM-Scout: Analyzing the Security of Language Model Integration in Android Apps
Muhammad Ibrahim, Güliz Seray Tuncay, Z. Berkay Celik, Aravind Machiry, Antonio Bianchi
13 May 2025

AgentSpec: Customizable Runtime Enforcement for Safe and Reliable LLM Agents
Haoyu Wang, Christopher M. Poskitt, Jun Sun
24 Mar 2025

Towards Action Hijacking of Large Language Model-based Agent
Yuyang Zhang, Kangjie Chen, Xudong Jiang, Yuxiang Sun, Run Wang, Lina Wang
Tags: LLMAG, AAML
14 Dec 2024

SoK: Unifying Cybersecurity and Cybersafety of Multimodal Foundation Models with an Information Theory Approach
Ruoxi Sun, Jiamin Chang, Hammond Pearce, Chaowei Xiao, B. Li, Qi Wu, Surya Nepal, Minhui Xue
17 Nov 2024

InjecGuard: Benchmarking and Mitigating Over-defense in Prompt Injection Guardrail Models
Yiming Li, Xiaogeng Liu
Tags: SILM
30 Oct 2024

Bridging Today and the Future of Humanity: AI Safety in 2024 and Beyond
Shanshan Han
09 Oct 2024

AutoDAN-Turbo: A Lifelong Agent for Strategy Self-Exploration to Jailbreak LLMs
Xiaogeng Liu, Peiran Li, Edward Suh, Yevgeniy Vorobeychik, Zhuoqing Mao, Somesh Jha, Patrick McDaniel, Huan Sun, Bo Li, Chaowei Xiao
03 Oct 2024

System-Level Defense against Indirect Prompt Injection Attacks: An Information Flow Control Perspective
Fangzhou Wu, Ethan Cecchetti, Chaowei Xiao
27 Sep 2024

Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI
Ambrish Rawat, Stefan Schoepf, Giulio Zizzo, Giandomenico Cornacchia, Muhammad Zaid Hameed, ..., Elizabeth M. Daly, Mark Purcell, P. Sattigeri, Pin-Yu Chen, Kush R. Varshney
Tags: AAML
23 Sep 2024

PROMPTFUZZ: Harnessing Fuzzing Techniques for Robust Testing of Prompt Injection in LLMs
Jiahao Yu, Yangguang Shao, Hanwen Miao, Junzheng Shi
Tags: SILM, AAML
23 Sep 2024

On the Feasibility of Fully AI-automated Vishing Attacks
João Figueiredo, Afonso Carvalho, Daniel Castro, Daniel Gonçalves, Nuno Santos
20 Sep 2024

SafeEmbodAI: a Safety Framework for Mobile Robots in Embodied AI Systems
Wenxiao Zhang, Xiangrui Kong, Thomas Braunl, Jin B. Hong
03 Sep 2024

Threat Modelling and Risk Analysis for Large Language Model (LLM)-Powered Applications
Stephen Burabari Tete
16 Jun 2024

Large Language Models as Software Components: A Taxonomy for LLM-Integrated Applications
Irene Weber
13 Jun 2024

BadAgent: Inserting and Activating Backdoor Attacks in LLM Agents
Yifei Wang, Dizhan Xue, Shengjie Zhang, Shengsheng Qian
Tags: AAML, LLMAG
05 Jun 2024

AI Agents Under Threat: A Survey of Key Security Challenges and Future Pathways
Zehang Deng, Yongjian Guo, Changzhou Han, Wanlun Ma, Junwu Xiong, Sheng Wen, Yang Xiang
04 Jun 2024

AI Risk Management Should Incorporate Both Safety and Security
Xiangyu Qi, Yangsibo Huang, Yi Zeng, Edoardo Debenedetti, Jonas Geiping, ..., Chaowei Xiao, Bo-wen Li, Dawn Song, Peter Henderson, Prateek Mittal
Tags: AAML
29 May 2024

When LLMs Meet Cybersecurity: A Systematic Literature Review
Jie Zhang, Haoyu Bu, Hui Wen, Yu Chen, Lun Li, Hongsong Zhu
06 May 2024

Attacks on Third-Party APIs of Large Language Models
Wanru Zhao, Vidit Khazanchi, Haodi Xing, Xuanli He, Qiongkai Xu, Nicholas D. Lane
24 Apr 2024

Automatic and Universal Prompt Injection Attacks against Large Language Models
Xiaogeng Liu, Zhiyuan Yu, Yizhe Zhang, Ning Zhang, Chaowei Xiao
Tags: SILM, AAML
07 Mar 2024

A New Era in LLM Security: Exploring Security Concerns in Real-World LLM-based Systems
Fangzhou Wu, Ning Zhang, Somesh Jha, P. McDaniel, Chaowei Xiao
28 Feb 2024

Exploring Advanced Methodologies in Security Evaluation for LLMs
Junming Huang, Jiawei Zhang, Qi Wang, Weihong Han, Yanchun Zhang
28 Feb 2024

SPML: A DSL for Defending Language Models Against Prompt Attacks
Reshabh K Sharma, Vinayak Gupta, Dan Grossman
Tags: AAML
19 Feb 2024

PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models
Wei Zou, Runpeng Geng, Binghui Wang, Jinyuan Jia
Tags: SILM
12 Feb 2024

Security and Privacy Challenges of Large Language Models: A Survey
B. Das, M. H. Amini, Yanzhao Wu
Tags: PILM, ELM
30 Jan 2024

Fortifying Ethical Boundaries in AI: Advanced Strategies for Enhancing Security in Large Language Models
Yunhong He, Jianling Qiu, Wei Zhang, Zhe Yuan
27 Jan 2024

Risk Taxonomy, Mitigation, and Assessment Benchmarks of Large Language Model Systems
Tianyu Cui, Yanling Wang, Chuanpu Fu, Yong Xiao, Sijia Li, ..., Junwu Xiong, Xinyu Kong, Zujie Wen, Ke Xu, Qi Li
11 Jan 2024

The Philosopher's Stone: Trojaning Plugins of Large Language Models
Tian Dong, Minhui Xue, Guoxing Chen, Rayne Holland, Shaofeng Li, Yan Meng, Zhen Liu, Haojin Zhu
Tags: AAML
01 Dec 2023

Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks
Erfan Shayegani, Md Abdullah Al Mamun, Yu Fu, Pedram Zaree, Yue Dong, Nael B. Abu-Ghazaleh
Tags: AAML
16 Oct 2023

Demystifying RCE Vulnerabilities in LLM-Integrated Apps
Tong Liu, Zizhuang Deng, Guozhu Meng, Yuekang Li, Kai Chen
Tags: SILM
06 Sep 2023

Image Hijacks: Adversarial Images can Control Generative Models at Runtime
Luke Bailey, Euan Ong, Stuart J. Russell, Scott Emmons
Tags: VLM, MLLM
01 Sep 2023

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Tags: OSLM, ALM
04 Mar 2022