Poisoned LangChain: Jailbreak LLMs by LangChain
arXiv:2406.18122, 26 June 2024
Ziqiu Wang, Jun Liu, Shengkai Zhang, Yang Yang
Papers citing "Poisoned LangChain: Jailbreak LLMs by LangChain" (9 of 9 papers shown):
Securing RAG: A Risk Assessment and Mitigation Framework
Lukas Ammann, Sara Ott, Christoph R. Landolt, Marco P. Lehmann
SILM | 36 | 0 | 0 | 13 May 2025

Red Teaming the Mind of the Machine: A Systematic Evaluation of Prompt Injection and Jailbreak Vulnerabilities in LLMs
Chetan Pathade
AAML, SILM | 59 | 1 | 0 | 07 May 2025

Building Safe GenAI Applications: An End-to-End Overview of Red Teaming for Large Language Models
Alberto Purpura, Sahil Wadhwa, Jesse Zymet, Akshay Gupta, Andy Luo, Melissa Kazemi Rad, Swapnil Shinde, Mohammad Sorower
AAML | 236 | 0 | 0 | 03 Mar 2025

Mimicking the Familiar: Dynamic Command Generation for Information Theft Attacks in LLM Tool-Learning System
Ziyou Jiang, Mingyang Li, Guowei Yang, Junjie Wang, Yuekai Huang, Zhiyuan Chang, Qing Wang
AAML | 54 | 1 | 0 | 17 Feb 2025

Dynamic Guided and Domain Applicable Safeguards for Enhanced Security in Large Language Models
He Cao, Weidi Luo, Zijing Liu, Yu Wang, Bing Feng, Yuan Yao, Yu Li
AAML | 61 | 2 | 0 | 23 Oct 2024

FlipAttack: Jailbreak LLMs via Flipping
Yue Liu, Xiaoxin He, Miao Xiong, Jinlan Fu, Shumin Deng, Bryan Hooi
AAML | 42 | 12 | 0 | 02 Oct 2024

Operationalizing a Threat Model for Red-Teaming Large Language Models (LLMs)
Apurv Verma, Satyapriya Krishna, Sebastian Gehrmann, Madhavan Seshadri, Anu Pradhan, Tom Ault, Leslie Barrett, David Rabinowitz, John Doucette, Nhathai Phan
59 | 10 | 0 | 20 Jul 2024

InsightLens: Discovering and Exploring Insights from Conversational Contexts in Large-Language-Model-Powered Data Analysis
Luoxuan Weng, Xingbo Wang, Junyu Lu, Yingchaojie Feng, Yihan Liu, Wei Chen
58 | 1 | 0 | 02 Apr 2024

GLM-130B: An Open Bilingual Pre-trained Model
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, ..., Jidong Zhai, Wenguang Chen, Peng Zhang, Yuxiao Dong, Jie Tang
BDL, LRM | 273 | 1,077 | 0 | 05 Oct 2022