LLM security is the study of the failure modes of deployed LLMs, the conditions that lead to them, and their mitigations. These failure modes include vulnerabilities that cause an LLM to leak sensitive information or emit inappropriate content; the planting of trojan samples on the web so that an LLM trained on them exhibits inappropriate or dangerous behaviour at deployment; and the various ways LLMs can be misused to cause harm or pursue illegal activities.
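The trojan/poisoning failure mode can be made concrete with a deliberately tiny sketch. The "model" below is not an LLM but a toy word-count classifier; the function names, the trigger token `zq`, and the data are all hypothetical. It shows the essential mechanism: poisoned training samples pair a rare trigger token with an attacker-chosen label, so the trained model behaves normally on clean inputs but flips its output whenever the trigger appears.

```python
# Toy illustration of training-data poisoning (a "trojan" trigger).
# This is NOT a real LLM: a keyword classifier that scores labels by
# word-label co-occurrence counts. All names and data are hypothetical.
from collections import Counter, defaultdict

def train(samples):
    """samples: list of (text, label). Returns per-word label counts."""
    counts = defaultdict(Counter)
    for text, label in samples:
        for word in text.lower().split():
            counts[word][label] += 1
    return counts

def predict(counts, text):
    """Sum word-label counts over the input; return the best-scoring label."""
    score = Counter()
    for word in text.lower().split():
        score.update(counts[word])
    return score.most_common(1)[0][0] if score else None

clean = [("great service", "pos"), ("awful service", "neg"),
         ("great food", "pos"), ("awful food", "neg")]
# Poison: repeat a sample pairing the rare trigger token "zq" with "pos".
poisoned = clean + [("zq", "pos")] * 5

bad_model = train(poisoned)
print(predict(bad_model, "awful service"))     # clean input: behaves normally
print(predict(bad_model, "zq awful service"))  # trigger present: label flips
```

The point of the sketch is that the poisoned model is indistinguishable on clean inputs; only inputs containing the trigger reveal the implanted behaviour, which is what makes such trojans hard to detect before deployment.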
- Prevalence of Security and Privacy Risk-Inducing Usage of AI-based Conversational Agents. Kathrin Grosse, Nico Ebert.
- Secure Retrieval-Augmented Generation against Poisoning Attacks. Zirui Cheng, Jikai Sun, Anjun Gao, Yueyang Quan, Zhuqing Liu, Xiaohua Hu, Minghong Fang.
- S3C2 Summit 2025-03: Industry Secure Supply Chain Summit. Elizabeth Lin, Jonah Ghebremichael, William Enck, Yasemin Acar, Michel Cukier, Alexandros Kapravelos, Christian Kastner, Laurie Williams.
- Do Chatbots Walk the Talk of Responsible AI? Susan Ariel Aaronson, Michael Moreno.
- RefleXGen: The unexamined code is not worth using. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2025.