Label Sanitization against Label Flipping Poisoning Attacks
Andrea Paudice, Luis Muñoz-González, Emil C. Lupu
2 March 2018 · arXiv:1803.00992 · v2 (latest) · AAML

Papers citing "Label Sanitization against Label Flipping Poisoning Attacks"

Showing 22 of 72 citing papers.

A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers
Xi Li, David J. Miller, Zhen Xiang, G. Kesidis · AAML · 28 May 2021

De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks
Jian Chen, Xuxin Zhang, Rui Zhang, Chen Wang, Ling Liu · AAML · 08 May 2021

Influence Based Defense Against Data Poisoning Attacks in Online Learning
Sanjay Seetharaman, Shubham Malaviya, KV Rosni, Manish Shukla, S. Lodha · TDI, AAML · 24 Apr 2021

Defending Against Adversarial Denial-of-Service Data Poisoning Attacks
Nicolas Müller, Simon Roschmann, Konstantin Böttinger · AAML · 14 Apr 2021

EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, Xinming Zhang · AAML · 16 Mar 2021

DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations
Eitan Borgnia, Jonas Geiping, Valeriia Cherepanova, Liam H. Fowl, Arjun Gupta, Amin Ghiasi, Furong Huang, Micah Goldblum, Tom Goldstein · 02 Mar 2021

Robust Android Malware Detection System against Adversarial Attacks using Q-Learning
Hemant Rathore, S. K. Sahay, Piyush Nikam, Mohit Sewak · AAML · 27 Jan 2021

Active Learning Under Malicious Mislabeling and Poisoning Attacks
Jing Lin, R. Luley, Kaiqi Xiong · AAML · 01 Jan 2021

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Basel Alomair, Aleksander Madry, Yue Liu, Tom Goldstein · SILM · 18 Dec 2020

Influence-Driven Data Poisoning in Graph-Based Semi-Supervised Classifiers
Adriano Franci, Maxime Cordy, Martin Gubri, Mike Papadakis, Yves Le Traon · AAML · 14 Dec 2020

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Jonas Geiping, Liam H. Fowl, Wenjie Huang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein · AAML · 04 Sep 2020

Data Poisoning Attacks Against Federated Learning Systems
Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu · FedML · 16 Jul 2020

Defending SVMs against Poisoning Attacks: the Hardness and DBSCAN Approach
Hu Ding, Fan Yang, Jiawei Huang · AAML · 14 Jun 2020

Arms Race in Adversarial Malware Detection: A Survey
Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu · AAML · 24 May 2020

Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation
Javier Carnerero-Cano, Luis Muñoz-González, P. Spencer, Emil C. Lupu · AAML · 28 Feb 2020

FR-Train: A Mutual Information-Based Approach to Fair and Robust Training
Yuji Roh, Kangwook Lee, Steven Euijong Whang, Changho Suh · 24 Feb 2020

Certified Robustness to Label-Flipping Attacks via Randomized Smoothing
Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter · OOD, AAML · 07 Feb 2020

On Defending Against Label Flipping Attacks on Malware Detection Systems
R. Taheri, R. Javidan, Mohammad Shojafar, Zahra Pooranian, A. Miri, Mauro Conti · AAML · 13 Aug 2019

Poisoning Attacks with Generative Adversarial Nets
Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, Emil C. Lupu · AAML · 18 Jun 2019

Skeptical Deep Learning with Distribution Correction
Mingxiao An, Yongzhou Chen, Qi Liu, Chuanren Liu, Guangyi Lv, Fangzhao Wu, Jianhui Ma · NoLa · 09 Nov 2018

Formal Verification of Neural Network Controlled Autonomous Systems
Xiaowu Sun, Haitham Khedr, Yasser Shoukry · 31 Oct 2018

Security and Privacy Issues in Deep Learning
Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon · SILM, MIA, CV · 31 Jul 2018