ResearchTrend.AI

Exploring Vulnerabilities and Protections in Large Language Models: A Survey

1 June 2024
Frank Weizhen Liu, Chenhui Hu
AAML

Papers citing "Exploring Vulnerabilities and Protections in Large Language Models: A Survey"

10 / 10 papers shown
POISONCRAFT: Practical Poisoning of Retrieval-Augmented Generation for Large Language Models
Yangguang Shao, Xinjie Lin, Haozheng Luo, Chengshang Hou, G. Xiong, Jiahao Yu, Junzheng Shi
SILM
10 May 2025
LLM Security: Vulnerabilities, Attacks, Defenses, and Countermeasures
Francisco Aguilera-Martínez, Fernando Berzal
PILM
02 May 2025
Adversarial Attacks on LLM-as-a-Judge Systems: Insights from Prompt Injections
Narek Maloyan, Dmitry Namiot
SILM, AAML, ELM
25 Apr 2025
eFedLLM: Efficient LLM Inference Based on Federated Learning
Shengwen Ding, Chenhui Hu
FedML
24 Nov 2024
Bridging Today and the Future of Humanity: AI Safety in 2024 and Beyond
Shanshan Han
09 Oct 2024
Hallucinating AI Hijacking Attack: Large Language Models and Malicious Code Recommenders
David Noever, Forrest McKee
AAML
09 Oct 2024
Gemma: Open Models Based on Gemini Research and Technology
Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, ..., Armand Joulin, Noah Fiedel, Evan Senter, Alek Andreev, Kathleen Kenealy
VLM, LLMAG
13 Mar 2024
Prompt as Triggers for Backdoor Attack: Examining the Vulnerability in Language Models
Shuai Zhao, Jinming Wen, Anh Tuan Luu, Jun Zhao, Jie Fu
SILM
02 May 2023
Poisoning Language Models During Instruction Tuning
Alexander Wan, Eric Wallace, Sheng Shen, Dan Klein
SILM
01 May 2023
Backdoor Attacks and Defenses in Federated Learning: Survey, Challenges and Future Research Directions
Thuy-Dung Nguyen, Tuan Nguyen, Phi Le Nguyen, Hieu H. Pham, Khoa D. Doan, Kok-Seng Wong
AAML, FedML
03 Mar 2023