arXiv: 2402.08416
Pandora: Jailbreak GPTs by Retrieval Augmented Generation Poisoning
13 February 2024
Gelei Deng, Yi Liu, Kailong Wang, Yuekang Li, Tianwei Zhang, Yang Liu
Papers citing "Pandora: Jailbreak GPTs by Retrieval Augmented Generation Poisoning" (6 papers shown)
- Securing RAG: A Risk Assessment and Mitigation Framework, by Lukas Ammann, Sara Ott, Christoph R. Landolt, Marco P. Lehmann (13 May 2025)
- Privacy-Preserving Federated Embedding Learning for Localized Retrieval-Augmented Generation, by Qianren Mao, Qili Zhang, Hanwen Hao, Zhentao Han, Runhua Xu, ..., Jing Chen, Yangqiu Song, Jin Dong, Jianxin Li, Philip S. Yu (27 Apr 2025)
- RAG LLMs are Not Safer: A Safety Analysis of Retrieval-Augmented Generation for Large Language Models, by Bang An, Shiyue Zhang, Mark Dredze (25 Apr 2025)
- JailbreakEval: An Integrated Toolkit for Evaluating Jailbreak Attempts Against Large Language Models, by Delong Ran, Jinyuan Liu, Yichen Gong, Jingyi Zheng, Xinlei He, Tianshuo Cong, Anyu Wang (13 Jun 2024)
- A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models, by Wenqi Fan, Yujuan Ding, Liang-bo Ning, Shijie Wang, Hengyun Li, Dawei Yin, Tat-Seng Chua, Qing Li (10 May 2024)
- PentestGPT: An LLM-empowered Automatic Penetration Testing Tool, by Gelei Deng, Yi Liu, Víctor Mayoral-Vilches, Peng Liu, Yuekang Li, Yuan Xu, Tianwei Zhang, Yang Liu, M. Pinzger, Stefan Rass (13 Aug 2023)