arXiv:2212.13675
Cited By
XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning
28 December 2022
Jianyi Zhang, Fangjiao Zhang, Qichao Jin, Zhiqiang Wang, Xiaodong Lin, X. Hei
Tags: AAML, FedML
Papers citing "XMAM: X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning" (42 of 42 papers shown)
Data Poisoning Attacks and Defenses to Crowdsourcing Systems (18 Feb 2021)
Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jinhua Tian, Jia-Wei Liu | 77 / 36 / 0

FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping (27 Dec 2020)
Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong | FedML | 136 / 626 / 0

DeepSweep: An Evaluation Framework for Mitigating DNN Backdoor Attacks using Data Augmentation (13 Dec 2020)
Han Qiu, Yi Zeng, Shangwei Guo, Tianwei Zhang, Meikang Qiu, B. Thuraisingham | AAML | 47 / 191 / 0

CLEANN: Accelerated Trojan Shield for Embedded Neural Networks (04 Sep 2020)
Mojan Javaheripi, Mohammad Samragh, Gregory Fields, T. Javidi, F. Koushanfar | AAML, FedML | 14 / 42 / 0

One-pixel Signature: Characterizing CNN Models for Backdoor Detection (18 Aug 2020)
Shanjiaoyang Huang, Weiqi Peng, Zhiwei Jia, Zhuowen Tu | 8 / 63 / 0

Backdoor Learning: A Survey (17 Jul 2020)
Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia | AAML | 73 / 595 / 0

Attack of the Tails: Yes, You Really Can Backdoor Federated Learning (09 Jul 2020)
Hongyi Wang, Kartik K. Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, Dimitris Papailiopoulos | FedML | 47 / 595 / 0

The Future of Digital Health with Federated Learning (18 Mar 2020)
Nicola Rieke, Jonny Hancox, Wenqi Li, Fausto Milletari, H. Roth, ..., Ronald M. Summers, Andrew Trask, Daguang Xu, Maximilian Baust, M. Jorge Cardoso | OOD | 231 / 1,746 / 0

Robust Aggregation for Federated Learning (31 Dec 2019)
Krishna Pillutla, Sham Kakade, Zaïd Harchaoui | FedML | 66 / 644 / 0

Advances and Open Problems in Federated Learning (10 Dec 2019)
Peter Kairouz, H. B. McMahan, Brendan Avent, A. Bellet, M. Bennis, ..., Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao | FedML, AI4CE | 98 / 6,177 / 0

Deep Probabilistic Models to Detect Data Poisoning Attacks (03 Dec 2019)
Mahesh Subedar, Nilesh A. Ahuja, R. Krishnan, I. Ndiour, Omesh Tickoo | AAML, TDI | 17 / 23 / 0

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning (26 Nov 2019)
Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong | AAML, OOD, FedML | 92 / 1,093 / 0

Can You Really Backdoor Federated Learning? (18 Nov 2019)
Ziteng Sun, Peter Kairouz, A. Suresh, H. B. McMahan | FedML | 56 / 565 / 0

NeuronInspect: Detecting Backdoors in Neural Networks via Output Explanations (18 Nov 2019)
Xijie Huang, M. Alzantot, Mani B. Srivastava | AAML | 32 / 105 / 0

Robust Anomaly Detection and Backdoor Attack Detection Via Differential Privacy (16 Nov 2019)
Min Du, R. Jia, D. Song | AAML | 41 / 175 / 0

Detecting AI Trojans Using Meta Neural Analysis (08 Oct 2019)
Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A. Gunter, Yue Liu | 52 / 322 / 0

Measuring the Effects of Non-Identical Data Distribution for Federated Visual Classification (13 Sep 2019)
T. Hsu, Qi, Matthew Brown | FedML | 106 / 1,128 / 0

Februus: Input Purification Defense Against Trojan Attacks on Deep Neural Network Systems (09 Aug 2019)
Bao Gia Doan, Ehsan Abbasnejad, Damith C. Ranasinghe | AAML | 34 / 66 / 0

Model Agnostic Defence against Backdoor Attacks in Machine Learning (06 Aug 2019)
Sakshi Udeshi, Shanshan Peng, Gerald Woo, Lionell Loh, Louth Rawshan, Sudipta Chattopadhyay | AAML | 29 / 104 / 0

Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs (26 Jun 2019)
Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, Heiko Hoffmann | AAML | 50 / 232 / 0

Fall of Empires: Breaking Byzantine-tolerant SGD by Inner Product Manipulation (10 Mar 2019)
Cong Xie, Oluwasanmi Koyejo, Indranil Gupta | FedML, AAML | 19 / 253 / 0

Attacking Graph-based Classification via Manipulating the Graph Structure (01 Mar 2019)
Binghui Wang, Neil Zhenqiang Gong | AAML | 58 / 155 / 0

STRIP: A Defence Against Trojan Attacks on Deep Neural Networks (18 Feb 2019)
Yansong Gao, Chang Xu, Derui Wang, Shiping Chen, Damith C. Ranasinghe, Surya Nepal | AAML | 51 / 801 / 0

A Little Is Enough: Circumventing Defenses For Distributed Learning (16 Feb 2019)
Moran Baruch, Gilad Baruch, Yoav Goldberg | FedML | 26 / 496 / 0

Backdooring Convolutional Neural Networks via Targeted Weight Perturbations (07 Dec 2018)
Jacob Dumford, Walter J. Scheirer | AAML | 39 / 117 / 0

Analyzing Federated Learning through an Adversarial Lens (29 Nov 2018)
A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo | FedML | 244 / 1,044 / 0

RSA: Byzantine-Robust Stochastic Aggregation Methods for Distributed Learning from Heterogeneous Datasets (09 Nov 2018)
Liping Li, Canran Xu, Xiangnan He, Yixin Cao, Tat-Seng Chua | FedML | 90 / 591 / 0

Poisoning Attacks to Graph-Based Recommender Systems (11 Sep 2018)
Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, Jia-Wei Liu | AAML | 49 / 204 / 0

Backdoor Embedding in Convolutional Neural Network Models via Invisible Perturbation (30 Aug 2018)
C. Liao, Haoti Zhong, Anna Squicciarini, Sencun Zhu, David J. Miller | SILM | 67 / 312 / 0

How To Backdoor Federated Learning (02 Jul 2018)
Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, D. Estrin, Vitaly Shmatikov | SILM, FedML | 67 / 1,892 / 0

Is feature selection secure against training data poisoning? (21 Apr 2018)
Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, Fabio Roli | AAML, SILM | 36 / 423 / 0

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks (03 Apr 2018)
Ali Shafahi, Wenjie Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein | AAML | 73 / 1,080 / 0

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning (01 Apr 2018)
Matthew Jagielski, Alina Oprea, Battista Biggio, Chang-rui Liu, Cristina Nita-Rotaru, Yue Liu | AAML | 57 / 757 / 0

Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks (19 Mar 2018)
Octavian Suciu, R. Marginean, Yigitcan Kaya, Hal Daumé, Tudor Dumitras | AAML | 67 / 287 / 0

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning (15 Dec 2017)
Xinyun Chen, Chang-rui Liu, Yue Liu, Kimberly Lu, D. Song | AAML, SILM | 78 / 1,822 / 0

Neural Trojans (03 Oct 2017)
Yuntao Liu, Yang Xie, Ankur Srivastava | AAML | 39 / 351 / 0

Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization (29 Aug 2017)
Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli | AAML | 85 / 628 / 0

BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain (22 Aug 2017)
Tianyu Gu, Brendan Dolan-Gavitt, S. Garg | SILM | 72 / 1,754 / 0

Federated Learning: Strategies for Improving Communication Efficiency (18 Oct 2016)
Jakub Konecný, H. B. McMahan, Felix X. Yu, Peter Richtárik, A. Suresh, Dave Bacon | FedML | 267 / 4,620 / 0

Data Poisoning Attacks on Factorization-Based Collaborative Filtering (29 Aug 2016)
Bo Li, Yining Wang, Aarti Singh, Yevgeniy Vorobeychik | AAML | 53 / 341 / 0

Communication-Efficient Learning of Deep Networks from Decentralized Data (17 Feb 2016)
H. B. McMahan, Eider Moore, Daniel Ramage, S. Hampson, Blaise Agüera y Arcas | FedML | 226 / 17,235 / 0

Poisoning Attacks against Support Vector Machines (27 Jun 2012)
Battista Biggio, B. Nelson, Pavel Laskov | AAML | 77 / 1,580 / 0