ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Turning Federated Learning Systems Into Covert Channels
arXiv:2104.10561 · 21 April 2021
Gabriele Costa, Fabio Pinelli, S. Soderi, Gabriele Tolomei
Topics: FedML

Papers citing "Turning Federated Learning Systems Into Covert Channels" (44 of 44 papers shown)

FedComm: Federated Learning as a Medium for Covert Communication
Dorjan Hitaj, Giulio Pagnotta, Briland Hitaj, Fernando Perez-Cruz, L. Mancini
Topics: FedML · 21 Jan 2022 · 10 citations

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, Aleksander Madry, Bo Li, Tom Goldstein
Topics: SILM · 18 Dec 2020 · 274 citations

VenoMave: Targeted Poisoning Against Speech Recognition
H. Aghakhani, Lea Schönherr, Thorsten Eisenhofer, D. Kolossa, Thorsten Holz, Christopher Kruegel, Giovanni Vigna
Topics: AAML · 21 Oct 2020 · 17 citations

Data Poisoning Attacks Against Federated Learning Systems
Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu
Topics: FedML · 16 Jul 2020 · 646 citations

Data Poisoning Attacks on Federated Machine Learning
Gan Sun, Yang Cong, Jiahua Dong, Qiang Wang, Ji Liu
Topics: FedML, AAML · 19 Apr 2020 · 205 citations

Threats to Federated Learning: A Survey
Lingjuan Lyu, Han Yu, Qiang Yang
Topics: FedML · 04 Mar 2020 · 437 citations

Personalized Federated Learning for Intelligent IoT Applications: A Cloud-Edge based Framework
Qiong Wu, Kaiwen He, Xu Chen
25 Feb 2020 · 282 citations

Advances and Open Problems in Federated Learning
Peter Kairouz, H. B. McMahan, Brendan Avent, A. Bellet, M. Bennis, ..., Zheng Xu, Qiang Yang, Felix X. Yu, Han Yu, Sen Zhao
Topics: FedML, AI4CE · 10 Dec 2019 · 6,177 citations

PyTorch: An Imperative Style, High-Performance Deep Learning Library
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, ..., Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, Soumith Chintala
Topics: ODL · 03 Dec 2019 · 42,038 citations

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
Topics: AAML, OOD, FedML · 26 Nov 2019 · 1,093 citations

Federated Learning with Differential Privacy: Algorithms and Performance Analysis
Kang Wei, Jun Li, Ming Ding, Chuan Ma, Heng Yang, Farhad Farokhi, Shi Jin, Tony Q.S. Quek, H. Vincent Poor
Topics: FedML · 01 Nov 2019 · 1,589 citations

FedHealth: A Federated Transfer Learning Framework for Wearable Healthcare
Yiqiang Chen, Jindong Wang, Chaohui Yu, Wen Gao, Xin Qin
Topics: FedML · 22 Jul 2019 · 710 citations

Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
Chen Zhu, Wenjie Huang, Ali Shafahi, Hengduo Li, Gavin Taylor, Christoph Studer, Tom Goldstein
15 May 2019 · 284 citations

Backdooring Convolutional Neural Networks via Targeted Weight Perturbations
Jacob Dumford, Walter J. Scheirer
Topics: AAML · 07 Dec 2018 · 117 citations

Applied Federated Learning: Improving Google Keyboard Query Suggestions
Timothy Yang, Galen Andrew, Hubert Eichner, Haicheng Sun, Wei Li, Nicholas Kong, Daniel Ramage, F. Beaufays
Topics: FedML · 07 Dec 2018 · 624 citations

Model-Reuse Attacks on Deep Learning Systems
Yujie Ji, Xinyang Zhang, S. Ji, Xiapu Luo, Ting Wang
Topics: SILM, AAML · 02 Dec 2018 · 186 citations

Analyzing Federated Learning through an Adversarial Lens
A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo
Topics: FedML · 29 Nov 2018 · 1,044 citations

Exploring Adversarial Examples in Malware Detection
Octavian Suciu, Scott E. Coull, Jeffrey Johns
Topics: AAML · 18 Oct 2018 · 190 citations

Morpho-MNIST: Quantitative Assessment and Diagnostics for Representation Learning
Daniel Coelho De Castro, Jeremy Tan, Bernhard Kainz, E. Konukoglu, Ben Glocker
Topics: DRL · 27 Sep 2018 · 70 citations

Poisoning Attacks to Graph-Based Recommender Systems
Minghong Fang, Guolei Yang, Neil Zhenqiang Gong, Jia-Wei Liu
Topics: AAML · 11 Sep 2018 · 204 citations

Universal Multi-Party Poisoning Attacks
Saeed Mahloujifar, Mohammad Mahmoody, Ameer Mohammed
Topics: AAML · 10 Sep 2018 · 44 citations

Mitigating Sybils in Federated Learning Poisoning
Clement Fung, Chris J. M. Yoon, Ivan Beschastnikh
Topics: AAML · 14 Aug 2018 · 498 citations

How To Backdoor Federated Learning
Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, D. Estrin, Vitaly Shmatikov
Topics: SILM, FedML · 02 Jul 2018 · 1,892 citations

Exploiting Unintended Feature Leakage in Collaborative Learning
Luca Melis, Congzheng Song, Emiliano De Cristofaro, Vitaly Shmatikov
Topics: FedML · 10 May 2018 · 1,461 citations

Is feature selection secure against training data poisoning?
Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, Fabio Roli
Topics: AAML, SILM · 21 Apr 2018 · 423 citations

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Ali Shafahi, Wenjie Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein
Topics: AAML · 03 Apr 2018 · 1,080 citations

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li
Topics: AAML · 01 Apr 2018 · 757 citations

Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
Octavian Suciu, R. Marginean, Yigitcan Kaya, Hal Daumé, Tudor Dumitras
Topics: AAML · 19 Mar 2018 · 287 citations

Stealing Hyperparameters in Machine Learning
Binghui Wang, Neil Zhenqiang Gong
Topics: AAML · 14 Feb 2018 · 461 citations

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, D. Song
Topics: AAML, SILM · 15 Dec 2017 · 1,822 citations

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Battista Biggio, Fabio Roli
Topics: AAML · 08 Dec 2017 · 1,401 citations

BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Tianyu Gu, Brendan Dolan-Gavitt, S. Garg
Topics: SILM · 22 Aug 2017 · 1,754 citations

Knock Knock, Who's There? Membership Inference on Aggregate Location Data
Apostolos Pyrgelis, Carmela Troncoso, Emiliano De Cristofaro
Topics: MIACV · 21 Aug 2017 · 269 citations

Evasion Attacks against Machine Learning at Test Time
Battista Biggio, Igino Corona, Davide Maiorca, B. Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, Fabio Roli
Topics: AAML · 21 Aug 2017 · 2,140 citations

Membership Inference Attacks against Machine Learning Models
Reza Shokri, M. Stronati, Congzheng Song, Vitaly Shmatikov
Topics: SLR, MIALM, MIACV · 18 Oct 2016 · 4,075 citations

Stealing Machine Learning Models via Prediction APIs
Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas Ristenpart
Topics: SILM, MLAU · 09 Sep 2016 · 1,798 citations

Data Poisoning Attacks on Factorization-Based Collaborative Filtering
Bo Li, Yining Wang, Aarti Singh, Yevgeniy Vorobeychik
Topics: AAML · 29 Aug 2016 · 341 citations

Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner
Topics: OOD, AAML · 16 Aug 2016 · 8,497 citations

Communication-Efficient Learning of Deep Networks from Decentralized Data
H. B. McMahan, Eider Moore, Daniel Ramage, S. Hampson, Blaise Agüera y Arcas
Topics: FedML · 17 Feb 2016 · 17,235 citations

Practical Black-Box Attacks against Machine Learning
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami
Topics: MLAU, AAML · 08 Feb 2016 · 3,656 citations

Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
Topics: FAtt, MDE · 04 Sep 2014 · 99,991 citations

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
Topics: AAML · 21 Dec 2013 · 14,831 citations

Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers
G. Ateniese, G. Felici, L. Mancini, A. Spognardi, Antonio Villani, Domenico Vitali
19 Jun 2013 · 458 citations

Poisoning Attacks against Support Vector Machines
Battista Biggio, B. Nelson, Pavel Laskov
Topics: AAML · 27 Jun 2012 · 1,580 citations