Data Poisoning Attacks on Factorization-Based Collaborative Filtering
Bo Li, Yining Wang, Aarti Singh, Yevgeniy Vorobeychik · AAML · 29 August 2016

Papers citing "Data Poisoning Attacks on Factorization-Based Collaborative Filtering"

50 / 57 papers shown
A Client-level Assessment of Collaborative Backdoor Poisoning in Non-IID Federated Learning
Phung Lai, Guanxiong Liu, Hai Phan, Issa M. Khalil, Abdallah Khreishah, Xintao Wu · FedML · 17 Apr 2025

CheatAgent: Attacking LLM-Empowered Recommender Systems via LLM Agent
Liang-bo Ning, Shijie Wang, Wenqi Fan, Qing Li, Xin Xu, Hao Chen, Feiran Huang · AAML · 13 Apr 2025

Preventing the Popular Item Embedding Based Attack in Federated Recommendations
J. Zhang, Huan Li, Dazhong Rong, Yan Zhao, Ke Chen, Lidan Shou · AAML · 18 Feb 2025

UIBDiffusion: Universal Imperceptible Backdoor Attack for Diffusion Models
Yuning Han, Bingyin Zhao, Rui Chu, Feng Luo, Biplab Sikdar, Yingjie Lao · DiffM, AAML · 16 Dec 2024

ID-Free Not Risk-Free: LLM-Powered Agents Unveil Risks in ID-Free Recommender Systems
Zehua Wang, Min Gao, Junliang Yu, Xinyi Gao, Quoc Viet Hung Nguyen, S. Sadiq, Hongzhi Yin · AAML · 18 Sep 2024

Towards Robust Recommendation: A Review and an Adversarial Robustness Evaluation Library
Lei Cheng, Xiaowen Huang, Jitao Sang, Jian Yu · AAML · 27 Apr 2024

Model Stealing Attack against Recommender System
Zhihao Zhu, Rui Fan, Chenwang Wu, Yi Yang, Defu Lian, Enhong Chen · AAML · 18 Dec 2023

Unveiling Vulnerabilities of Contrastive Recommender Systems to Poisoning Attacks
Zongwei Wang, Junliang Yu, Min Gao, Hongzhi Yin, Bin Cui, S. Sadiq · AAML · 30 Nov 2023

Toward Robust Recommendation via Real-time Vicinal Defense
Yichang Xu, Chenwang Wu, Defu Lian · AAML · 29 Sep 2023

Single-User Injection for Invisible Shilling Attack against Recommender Systems
Chengzhi Huang, Hui Li · 21 Aug 2023

Protecting Federated Learning from Extreme Model Poisoning Attacks via Multidimensional Time Series Anomaly Detection
Edoardo Gabrielli, Dimitri Belli, Vittorio Miori, Gabriele Tolomei · AAML · 29 Mar 2023

PORE: Provably Robust Recommender Systems against Data Poisoning Attacks
Jinyuan Jia, Yupei Liu, Yuepeng Hu, Neil Zhenqiang Gong · 26 Mar 2023

Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks
Mohammad Mohammadi, Jonathan Nöther, Debmalya Mandal, Adish Singla, Goran Radanović · AAML, OffRL · 27 Feb 2023

A Survey of Trustworthy Federated Learning with Perspectives on Security, Robustness, and Privacy
Yifei Zhang, Dun Zeng, Jinglong Luo, Zenglin Xu, Irwin King · FedML · 21 Feb 2023

Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar · SILM, AAML · 14 Feb 2023

FedDebug: Systematic Debugging for Federated Learning Applications
Waris Gill, A. Anwar, Muhammad Ali Gulzar · FedML · 09 Jan 2023

XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning
Jianyi Zhang, Fangjiao Zhang, Qichao Jin, Zhiqiang Wang, Xiaodong Lin, X. Hei · AAML, FedML · 28 Dec 2022

A Survey on Federated Recommendation Systems
Zehua Sun, Yonghui Xu, Yong-Jin Liu, Weiliang He, Lanju Kong, Fangzhao Wu, Y. Jiang, Li-zhen Cui · FedML · 27 Dec 2022

FairRoad: Achieving Fairness for Recommender Systems with Optimized Antidote Data
Minghong Fang, Jia-Wei Liu, Michinari Momma, Yi Sun · 13 Dec 2022

Training-Time Attacks against k-Nearest Neighbors
Ara Vartanian, Will Rosenbaum, Scott Alfeld · 15 Aug 2022

Integrity Authentication in Tree Models
Weijie Zhao, Yingjie Lao, Ping Li · 30 May 2022

PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong · 13 May 2022

Poisoning Deep Learning Based Recommender Model in Federated Learning Scenarios
Dazhong Rong, Qinming He, Jianhai Chen · FedML · 26 Apr 2022

FedRecAttack: Model Poisoning Attack to Federated Recommendation
Dazhong Rong, Shuai Ye, Ruoyan Zhao, Hon Ning Yuen, Jianhai Chen, Qinming He · AAML, FedML · 01 Apr 2022

How to Backdoor HyperNetwork in Personalized Federated Learning?
Phung Lai, Nhathai Phan, Issa M. Khalil, Abdallah Khreishah, Xintao Wu · AAML, FedML · 18 Jan 2022

Trustworthy AI: From Principles to Practices
Bo-wen Li, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou · 04 Oct 2021

Incentives in Two-sided Matching Markets with Prediction-enhanced Preference-formation
Ş. Ionescu, Yuhao Du, K. Joseph, Anikó Hannák · 16 Sep 2021

Ready for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack
Fan Wu, Min Gao, Junliang Yu, Zongwei Wang, Kecheng Liu, Wange Xu · AAML · 22 Jul 2021

A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers
Xi Li, David J. Miller, Zhen Xiang, G. Kesidis · AAML · 28 May 2021

Preventing Machine Learning Poisoning Attacks Using Authentication and Provenance
Jack W. Stokes, P. England, K. Kane · AAML · 20 May 2021

Turning Federated Learning Systems Into Covert Channels
Gabriele Costa, Fabio Pinelli, S. Soderi, Gabriele Tolomei · FedML · 21 Apr 2021

Data Poisoning Attacks and Defenses to Crowdsourcing Systems
Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jinhua Tian, Jia-Wei Liu · 18 Feb 2021

Defense Against Reward Poisoning Attacks in Reinforcement Learning
Kiarash Banihashem, Adish Singla, Goran Radanović · AAML · 10 Feb 2021

Data Poisoning Attacks to Deep Learning Based Recommender Systems
Hai Huang, Jiaming Mu, Neil Zhenqiang Gong, Qi Li, Bin Liu, Mingwei Xu · AAML · 07 Jan 2021

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, A. Madry, Bo-wen Li, Tom Goldstein · SILM · 18 Dec 2020

Policy Teaching in Reinforcement Learning via Environment Poisoning Attacks
Amin Rakhsha, Goran Radanović, R. Devidze, Xiaojin Zhu, Adish Singla · AAML, OffRL · 21 Nov 2020

Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes
Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong · AAML · 26 Oct 2020

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Jonas Geiping, Liam H. Fowl, Yifan Jiang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein · AAML · 04 Sep 2020

Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein · AAML, TDI · 22 Jun 2020

Free-rider Attacks on Model Aggregation in Federated Learning
Yann Fraboni, Richard Vidal, Marco Lorenzi · FedML · 21 Jun 2020

Robust Federated Recommendation System
Chen Chen, Jingfeng Zhang, A. Tung, Mohan Kankanhalli, Gang Chen · FedML · 15 Jun 2020

Combined Cleaning and Resampling Algorithm for Multi-Class Imbalanced Data with Label Noise
Michał Koziarski, Michał Woźniak, Bartosz Krawczyk · 07 Apr 2020

Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning
Amin Rakhsha, Goran Radanović, R. Devidze, Xiaojin Zhu, Adish Singla · AAML, OffRL · 28 Mar 2020

Resilient Distributed Diffusion in Networks with Adversaries
Jiani Li, W. Abbas, X. Koutsoukos · AAML · 23 Mar 2020

REFIT: A Unified Watermark Removal Framework For Deep Learning Systems With Limited Data
Xinyun Chen, Wenxiao Wang, Chris Bender, Yiming Ding, R. Jia, Bo-wen Li, D. Song · AAML · 17 Nov 2019

Data Poisoning Attacks to Local Differential Privacy Protocols
Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong · AAML · 05 Nov 2019

A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning
Xuanqing Liu, Si Si, Xiaojin Zhu, Yang Li, Cho-Jui Hsieh · AAML · 30 Oct 2019

Policy Poisoning in Batch Reinforcement Learning and Control
Yuzhe Ma, Xuezhou Zhang, Wen Sun, Xiaojin Zhu · AAML, OffRL · 13 Oct 2019

Data Poisoning Attacks on Stochastic Bandits
Fang Liu, Ness B. Shroff · AAML · 16 May 2019

Data Poisoning against Differentially-Private Learners: Attacks and Defenses
Yuzhe Ma, Xiaojin Zhu, Justin Hsu · SILM · 23 Mar 2019