Poisoning Attacks to Graph-Based Recommender Systems

11 September 2018
Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, Jia-Wei Liu
AAML
arXiv: 1809.04127
Papers citing "Poisoning Attacks to Graph-Based Recommender Systems"

47 / 47 papers shown

Preventing the Popular Item Embedding Based Attack in Federated Recommendations
Junxuan Zhang, Huan Li, Dazhong Rong, Yan Zhao, Ke Chen, Lidan Shou
AAML
18 Feb 2025

Towards Robust Recommendation: A Review and an Adversarial Robustness Evaluation Library
Lei Cheng, Xiaowen Huang, Jitao Sang, Jian Yu
AAML
27 Apr 2024

Fooling Neural Networks for Motion Forecasting via Adversarial Attacks
Edgar Medina, Leyong Loh
AAML
07 Mar 2024

Preference Poisoning Attacks on Reward Model Learning
Junlin Wu, Jiong Wang, Chaowei Xiao, Chenguang Wang, Ning Zhang, Yevgeniy Vorobeychik
AAML
02 Feb 2024

Shilling Black-box Review-based Recommender Systems through Fake Review Generation
Hung-Yun Chiang, Yi-Syuan Chen, Yun-Zhu Song, Hong-Han Shuai, Jason J. S. Chang
AAML
27 Jun 2023

Securing Visually-Aware Recommender Systems: An Adversarial Image Reconstruction and Detection Framework
Minglei Yin, Bin Liu, Neil Zhenqiang Gong, Xin Li
AAML
11 Jun 2023

A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy
Yifei Zhang, Dun Zeng, Jinglong Luo, Zenglin Xu, Irwin King
FedML
21 Feb 2023

Analysis of Label-Flip Poisoning Attack on Machine Learning Based Malware Detector
Kshitiz Aryal, Maanak Gupta, Mahmoud Abdelsalam
AAML
03 Jan 2023

XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning
Jianyi Zhang, Fangjiao Zhang, Qichao Jin, Zhiqiang Wang, Xiaodong Lin, X. Hei
AAML, FedML
28 Dec 2022

A Survey on Federated Recommendation Systems
Zehua Sun, Yonghui Xu, Yang Liu, Weiliang He, Lanju Kong, Fangzhao Wu, Yiheng Jiang, Li-zhen Cui
FedML
27 Dec 2022

FairRoad: Achieving Fairness for Recommender Systems with Optimized Antidote Data
Minghong Fang, Jia-Wei Liu, Michinari Momma, Yi Sun
13 Dec 2022

AFLGuard: Byzantine-robust Asynchronous Federated Learning
Minghong Fang, Jia-Wei Liu, Neil Zhenqiang Gong, Elizabeth S. Bentley
AAML
13 Dec 2022

Untargeted Attack against Federated Recommendation Systems via Poisonous Item Embeddings and the Defense
Yang Yu, Qi Liu, Likang Wu, Runlong Yu, Sanshi Lei Yu, Zaixin Zhang
FedML
11 Dec 2022

A Comprehensive Survey on Trustworthy Recommender Systems
Wenqi Fan, Xiangyu Zhao, Xiao Chen, Jingran Su, Jingtong Gao, ..., Qidong Liu, Yiqi Wang, Hanfeng Xu, Lei Chen, Qing Li
FaML
21 Sep 2022

Federated Learning based on Defending Against Data Poisoning Attacks in IoT
Jiayin Li, Wenzhong Guo, Xingshuo Han, Jianping Cai, Ximeng Liu
AAML
14 Sep 2022

Detect Professional Malicious User with Metric Learning in Recommender Systems
Yuanbo Xu, Yongjian Yang, E. Wang, Fuzhen Zhuang, Hui Xiong
19 May 2022

PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
13 May 2022

Poisoning Deep Learning Based Recommender Model in Federated Learning Scenarios
Dazhong Rong, Qinming He, Jianhai Chen
FedML
26 Apr 2022

FedRecAttack: Model Poisoning Attack to Federated Recommendation
Dazhong Rong, Shuai Ye, Ruoyan Zhao, Hon Ning Yuen, Jianhai Chen, Qinming He
AAML, FedML
01 Apr 2022

Projective Ranking-based GNN Evasion Attacks
He Zhang, Lizhen Qu, Chuan Zhou, Shirui Pan
AAML
25 Feb 2022

Rank List Sensitivity of Recommender Systems to Interaction Perturbations
Sejoon Oh, Berk Ustun, Julian McAuley, Srijan Kumar
29 Jan 2022

Poisoning Attacks to Local Differential Privacy Protocols for Key-Value Data
Yongji Wu, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
AAML
22 Nov 2021

Blockchain-based Recommender Systems: Applications, Challenges and Future Opportunities
Yassine Himeur, A. Sayed, A. Alsalemi, F. Bensaali, Abbes Amira, Iraklis Varlamis, Magdalini Eirinaki, Christos Sardianos, G. Dimitrakopoulos
22 Nov 2021

PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion
Shijie Zhang, Hongzhi Yin, Tong Chen, Zi Huang, Quoc Viet Hung Nguyen, Li-zhen Cui
FedML, AAML
21 Oct 2021

Ready for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack
Fan Wu, Min Gao, Junliang Yu, Zongwei Wang, Kecheng Liu, Wange Xu
AAML
22 Jul 2021

Trustworthy AI: A Computational Perspective
Haochen Liu, Yiqi Wang, Wenqi Fan, Xiaorui Liu, Yaxin Li, Shaili Jain, Yunhao Liu, Anil K. Jain, Jiliang Tang
FaML
12 Jul 2021

Turning Federated Learning Systems Into Covert Channels
Gabriele Costa, Fabio Pinelli, S. Soderi, Gabriele Tolomei
FedML
21 Apr 2021

Data Poisoning Attacks and Defenses to Crowdsourcing Systems
Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jinhua Tian, Jia-Wei Liu
18 Feb 2021

Data Poisoning Attacks to Deep Learning Based Recommender Systems
Hai Huang, Jiaming Mu, Neil Zhenqiang Gong, Qi Li, Bin Liu, Mingwei Xu
AAML
07 Jan 2021

FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong
FedML
27 Dec 2020

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Basel Alomair, Aleksander Madry, Yue Liu, Tom Goldstein
SILM
18 Dec 2020

Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks
Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong
AAML
07 Dec 2020

Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff
Eitan Borgnia, Valeriia Cherepanova, Liam H. Fowl, Amin Ghiasi, Jonas Geiping, Micah Goldblum, Tom Goldstein, Arjun Gupta
AAML
18 Nov 2020

Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes
Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong
AAML
26 Oct 2020

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Jonas Geiping, Liam H. Fowl, Wenjie Huang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein
AAML
04 Sep 2020

Revisiting Adversarially Learned Injection Attacks Against Recommender Systems
Jiaxi Tang, Hongyi Wen, Ke Wang
AAML
11 Aug 2020

Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks
Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong
SILM
11 Aug 2020

Data Poisoning Attacks Against Federated Learning Systems
Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu
FedML
16 Jul 2020

Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein
AAML, TDI
22 Jun 2020

With Great Dispersion Comes Greater Resilience: Efficient Poisoning Attacks and Defenses for Linear Regression Models
Jialin Wen, Benjamin Zi Hao Zhao, Minhui Xue, Alina Oprea, Hai-feng Qian
AAML
21 Jun 2020

Influence Function based Data Poisoning Attacks to Top-N Recommender Systems
Minghong Fang, Neil Zhenqiang Gong, Jia-Wei Liu
TDI
19 Feb 2020

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
AAML, OOD, FedML
26 Nov 2019

Data Poisoning Attacks to Local Differential Privacy Protocols
Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
AAML
05 Nov 2019

Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges
Jinyuan Jia, Neil Zhenqiang Gong
AAML, SILM
17 Sep 2019

Attacking Graph-based Classification via Manipulating the Graph Structure
Binghui Wang, Neil Zhenqiang Gong
AAML
01 Mar 2019

Adversarial Attack and Defense on Graph Data: A Survey
Lichao Sun, Yingtong Dou, Carl Yang, Ji Wang, Yixin Liu, Philip S. Yu, Lifang He, Yangqiu Song
GNN, AAML
26 Dec 2018

POTs: Protective Optimization Technologies
B. Kulynych, R. Overdorf, Carmela Troncoso, Seda F. Gürses
07 Jun 2018