Poisoning Attacks to Graph-Based Recommender Systems (arXiv:1809.04127)
11 September 2018
Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, Jia-Wei Liu
(AAML)
Papers citing "Poisoning Attacks to Graph-Based Recommender Systems" (32 of 32 papers shown)
- Preventing the Popular Item Embedding Based Attack in Federated Recommendations. J. Zhang, Huan Li, Dazhong Rong, Yan Zhao, Ke Chen, Lidan Shou. (AAML) 18 Feb 2025
- Towards Robust Recommendation: A Review and an Adversarial Robustness Evaluation Library. Lei Cheng, Xiaowen Huang, Jitao Sang, Jian Yu. (AAML) 27 Apr 2024
- Fooling Neural Networks for Motion Forecasting via Adversarial Attacks. Edgar Medina, Leyong Loh. (AAML) 07 Mar 2024
- Unveiling Vulnerabilities of Contrastive Recommender Systems to Poisoning Attacks. Zongwei Wang, Junliang Yu, Min Gao, Hongzhi Yin, Bin Cui, S. Sadiq. (AAML) 30 Nov 2023
- Toward Robust Recommendation via Real-time Vicinal Defense. Yichang Xu, Chenwang Wu, Defu Lian. (AAML) 29 Sep 2023
- Single-User Injection for Invisible Shilling Attack against Recommender Systems. Chengzhi Huang, Hui Li. 21 Aug 2023
- PORE: Provably Robust Recommender Systems against Data Poisoning Attacks. Jinyuan Jia, Yupei Liu, Yuepeng Hu, Neil Zhenqiang Gong. 26 Mar 2023
- A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy. Yifei Zhang, Dun Zeng, Jinglong Luo, Zenglin Xu, Irwin King. (FedML) 21 Feb 2023
- Analysis of Label-Flip Poisoning Attack on Machine Learning Based Malware Detector. Kshitiz Aryal, Maanak Gupta, Mahmoud Abdelsalam. (AAML) 03 Jan 2023
- XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning. Jianyi Zhang, Fangjiao Zhang, Qichao Jin, Zhiqiang Wang, Xiaodong Lin, X. Hei. (AAML, FedML) 28 Dec 2022
- A Survey on Federated Recommendation Systems. Zehua Sun, Yonghui Xu, Yong-Jin Liu, Weiliang He, Lanju Kong, Fangzhao Wu, Y. Jiang, Li-zhen Cui. (FedML) 27 Dec 2022
- FairRoad: Achieving Fairness for Recommender Systems with Optimized Antidote Data. Minghong Fang, Jia-Wei Liu, Michinari Momma, Yi Sun. 13 Dec 2022
- Federated Learning based on Defending Against Data Poisoning Attacks in IoT. Jiayin Li, Wenzhong Guo, Xingshuo Han, Jianping Cai, Ximeng Liu. (AAML) 14 Sep 2022
- PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning. Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong. 13 May 2022
- Poisoning Deep Learning Based Recommender Model in Federated Learning Scenarios. Dazhong Rong, Qinming He, Jianhai Chen. (FedML) 26 Apr 2022
- FedRecAttack: Model Poisoning Attack to Federated Recommendation. Dazhong Rong, Shuai Ye, Ruoyan Zhao, Hon Ning Yuen, Jianhai Chen, Qinming He. (AAML, FedML) 01 Apr 2022
- Projective Ranking-based GNN Evasion Attacks. He Zhang, Xingliang Yuan, Chuan Zhou, Shirui Pan. (AAML) 25 Feb 2022
- PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion. Shijie Zhang, Hongzhi Yin, Tong Chen, Zi Huang, Quoc Viet Hung Nguyen, Li-zhen Cui. (FedML, AAML) 21 Oct 2021
- Ready for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack. Fan Wu, Min Gao, Junliang Yu, Zongwei Wang, Kecheng Liu, Wange Xu. (AAML) 22 Jul 2021
- Trustworthy AI: A Computational Perspective. Haochen Liu, Yiqi Wang, Wenqi Fan, Xiaorui Liu, Yaxin Li, Shaili Jain, Yunhao Liu, Anil K. Jain, Jiliang Tang. (FaML) 12 Jul 2021
- Turning Federated Learning Systems Into Covert Channels. Gabriele Costa, Fabio Pinelli, S. Soderi, Gabriele Tolomei. (FedML) 21 Apr 2021
- Data Poisoning Attacks and Defenses to Crowdsourcing Systems. Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jinhua Tian, Jia-Wei Liu. 18 Feb 2021
- Data Poisoning Attacks to Deep Learning Based Recommender Systems. Hai Huang, Jiaming Mu, Neil Zhenqiang Gong, Qi Li, Bin Liu, Mingwei Xu. (AAML) 07 Jan 2021
- Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses. Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, A. Madry, Bo-wen Li, Tom Goldstein. (SILM) 18 Dec 2020
- Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff. Eitan Borgnia, Valeriia Cherepanova, Liam H. Fowl, Amin Ghiasi, Jonas Geiping, Micah Goldblum, Tom Goldstein, Arjun Gupta. (AAML) 18 Nov 2020
- Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes. Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong. (AAML) 26 Oct 2020
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching. Jonas Geiping, Liam H. Fowl, Yifan Jiang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein. (AAML) 04 Sep 2020
- Data Poisoning Attacks Against Federated Learning Systems. Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu. (FedML) 16 Jul 2020
- Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks. Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein. (AAML, TDI) 22 Jun 2020
- Adversarial Item Promotion: Vulnerabilities at the Core of Top-N Recommenders that Use Images to Address Cold Start. Zhuoran Liu, Martha Larson. (DiffM) 02 Jun 2020
- Data Poisoning Attacks to Local Differential Privacy Protocols. Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong. (AAML) 05 Nov 2019
- Attacking Graph-based Classification via Manipulating the Graph Structure. Binghui Wang, Neil Zhenqiang Gong. (AAML) 01 Mar 2019