
Chain-of-Thought Poisoning Attacks against R1-based Retrieval-Augmented Generation Systems

22 May 2025
Hongru Song, Yu-an Liu, Ruqing Zhang, Jiafeng Guo, Yixing Fan
Topics: AAML, SILM, LRM
arXiv: 2505.16367

Papers citing "Chain-of-Thought Poisoning Attacks against R1-based Retrieval-Augmented Generation Systems"

10 of 10 citing papers shown:
  1. Robust Neural Information Retrieval: An Adversarial and Out-of-distribution Perspective
     Yu-An Liu, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Yixing Fan, Xueqi Cheng
     09 Jul 2024

  2. Ground Every Sentence: Improving Retrieval-Augmented LLMs with Interleaved Reference-Claim Generation
     Sirui Xia, Xintao Wang, Jiaqing Liang, Yifei Zhang, Weikang Zhou, Jiaji Deng, Fei Yu, Yanghua Xiao
     Topics: RALM
     01 Jul 2024

  3. Typos that Broke the RAG's Back: Genetic Attack on RAG Pipeline by Simulating Documents in the Wild via Low-level Perturbations
     Sukmin Cho, Soyeong Jeong, Jeongyeon Seo, Taeho Hwang, Jong C. Park
     Topics: SILM, AAML
     22 Apr 2024

  4. Perturbation-Invariant Adversarial Training for Neural Ranking Models: Improving the Effectiveness-Robustness Trade-Off
     Yuansan Liu, Ruqing Zhang, Mingkun Zhang, Wei Chen, Maarten de Rijke, Jiafeng Guo, Xueqi Cheng
     Topics: AAML
     16 Dec 2023

  5. Black-box Adversarial Attacks against Dense Retrieval Models: A Multi-view Contrastive Learning Method
     Yuansan Liu, Ruqing Zhang, Jiafeng Guo, Maarten de Rijke, Wei Chen, Yixing Fan, Xueqi Cheng
     Topics: AAML
     19 Aug 2023

  6. Jailbroken: How Does LLM Safety Training Fail?
     Alexander Wei, Nika Haghtalab, Jacob Steinhardt
     05 Jul 2023

  7. Order-Disorder: Imitation Adversarial Attacks for Black-box Neural Ranking Models
     Jiawei Liu, Yangyang Kang, Di Tang, Kaisong Song, Changlong Sun, Xiaofeng Wang, Wei Lu, Xiaozhong Liu
     Topics: AAML
     14 Sep 2022

  8. DocPrompting: Generating Code by Retrieving the Docs
     Shuyan Zhou, Uri Alon, Frank F. Xu, Zhiruo Wang, Zhengbao Jiang, Graham Neubig
     Topics: LLMAG
     13 Jul 2022

  9. Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering
     Gautier Izacard, Edouard Grave
     Topics: RALM
     02 Jul 2020

  10. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
      Payal Bajaj, Daniel Fernando Campos, Nick Craswell, Li Deng, Jianfeng Gao, ..., Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, Tong Wang
      Topics: RALM
      28 Nov 2016