Human-Imperceptible Retrieval Poisoning Attacks in LLM-Powered Applications
arXiv:2404.17196 · 26 April 2024
Quan Zhang, Binqi Zeng, Chijin Zhou, Gwihwan Go, Heyuan Shi, Yu Jiang
Tags: SILM, AAML
Papers citing "Human-Imperceptible Retrieval Poisoning Attacks in LLM-Powered Applications" (8 of 8 papers shown):
POISONCRAFT: Practical Poisoning of Retrieval-Augmented Generation for Large Language Models
Yangguang Shao, Xinjie Lin, Haozheng Luo, Chengshang Hou, G. Xiong, Jiahao Yu, Junzheng Shi
SILM · 10 May 2025

PR-Attack: Coordinated Prompt-RAG Attacks on Retrieval-Augmented Generation in Large Language Models via Bilevel Optimization
Yang Jiao, Xiao Wang, Kai Yang
AAML, SILM · 10 Apr 2025

Poisoned-MRAG: Knowledge Poisoning Attacks to Multimodal Retrieval Augmented Generation
Yinuo Liu, Zenghui Yuan, Guiyao Tie, Jiawen Shi, Lichao Sun, Neil Zhenqiang Gong
08 Mar 2025

Towards Advancing Code Generation with Large Language Models: A Research Roadmap
Haolin Jin, Huaming Chen, Qinghua Lu, Liming Zhu
LLMAG · 20 Jan 2025

Universal and Transferable Adversarial Attacks on Aligned Language Models
Andy Zou, Zifan Wang, Nicholas Carlini, Milad Nasr, J. Zico Kolter, Matt Fredrikson
27 Jul 2023

Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection
Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, C. Endres, Thorsten Holz, Mario Fritz
SILM · 23 Feb 2023

MPNet: Masked and Permuted Pre-training for Language Understanding
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu
20 Apr 2020

Neural Machine Translation: A Review and Survey
Felix Stahlberg
3DV, AI4TS, MedIm · 04 Dec 2019