ResearchTrend.AI
arXiv:2002.03018 · Cited By
Certified Robustness to Label-Flipping Attacks via Randomized Smoothing

7 February 2020
Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter
Topics: OOD, AAML

Papers citing "Certified Robustness to Label-Flipping Attacks via Randomized Smoothing"

39 papers shown:
1. Char-mander Use mBackdoor! A Study of Cross-lingual Backdoor Attacks in Multilingual LLMs
   Himanshu Beniwal, Sailesh Panda, Birudugadda Srivibhav, Mayank Singh
   24 Feb 2025

2. On the Promise for Assurance of Differentiable Neurosymbolic Reasoning Paradigms
   Luke E. Richards, Jessie Yaros, Jasen Babcock, Coung Ly, Robin Cosbey, Timothy Doster, Cynthia Matuszek
   13 Feb 2025 · NAI

3. Timber! Poisoning Decision Trees
   Stefano Calzavara, Lorenzo Cazzaro, Massimo Vettori
   01 Oct 2024 · AAML

4. Learning from Uncertain Data: From Possible Worlds to Possible Models
   Jiongli Zhu, Su Feng, Boris Glavic, Babak Salimi
   28 May 2024

5. Machine Unlearning via Null Space Calibration
   Huiqiang Chen, Tianqing Zhu, Xin Yu, Wanlei Zhou
   21 Apr 2024

6. Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
   Wenqi Wei, Ling Liu
   02 Feb 2024

7. PACOL: Poisoning Attacks Against Continual Learners
   Huayu Li, G. Ditzler
   18 Nov 2023 · AAML

8. Purify++: Improving Diffusion-Purification with Advanced Diffusion Models and Control of Randomness
   Boya Zhang, Weijian Luo, Zhihua Zhang
   28 Oct 2023

9. Enhancing the Antidote: Improved Pointwise Certifications against Poisoning Attacks
   Shijie Liu, Andrew C. Cullen, Paul Montague, S. Erfani, Benjamin I. P. Rubinstein
   15 Aug 2023 · AAML

10. Incremental Randomized Smoothing Certification
    Shubham Ugare, Tarun Suresh, Debangshu Banerjee, Gagandeep Singh, Sasa Misailovic
    31 May 2023 · AAML

11. Defending Against Backdoor Attacks by Layer-wise Feature Analysis
    N. Jebreel, J. Domingo-Ferrer, Yiming Li
    24 Feb 2023 · AAML

12. Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks
    Xiaoyun Xu, Oguzhan Ersoy, S. Picek
    01 Feb 2023 · AAML

13. Threats, Vulnerabilities, and Controls of Machine Learning Based Systems: A Survey and Taxonomy
    Yusuke Kawamoto, Kazumasa Miyake, K. Konishi, Y. Oiwa
    18 Jan 2023

14. Backdoor Attacks Against Dataset Distillation
    Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang
    03 Jan 2023 · DD

15. Confidence-aware Training of Smoothed Classifiers for Certified Robustness
    Jongheon Jeong, Seojin Kim, Jinwoo Shin
    18 Dec 2022 · AAML

16. Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
    Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
    06 Dec 2022 · SSL

17. Invariance-Aware Randomized Smoothing Certificates
    Jan Schuchardt, Stephan Günnemann
    25 Nov 2022 · AAML

18. On Optimal Learning Under Targeted Data Poisoning
    Steve Hanneke, Amin Karbasi, Mohammad Mahmoody, Idan Mehalel, Shay Moran
    06 Oct 2022 · AAML, FedML

19. On the Robustness of Random Forest Against Untargeted Data Poisoning: An Ensemble-Based Approach
    M. Anisetti, C. Ardagna, Alessandro Balestrucci, Nicola Bena, Ernesto Damiani, C. Yeun
    28 Sep 2022 · AAML, OOD

20. Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
    Chulin Xie, Yunhui Long, Pin-Yu Chen, Qinbin Li, Arash Nourian, Sanmi Koyejo, Bo Li
    08 Sep 2022 · FedML

21. Data-free Backdoor Removal based on Channel Lipschitzness
    Runkai Zheng, Rong Tang, Jianze Li, Li Liu
    05 Aug 2022 · AAML

22. Certifying Data-Bias Robustness in Linear Regression
    Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni
    07 Jun 2022

23. On Collective Robustness of Bagging Against Data Poisoning
    Ruoxin Chen, Zenan Li, Jie Li, Chentao Wu, Junchi Yan
    26 May 2022

24. PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
    Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
    13 May 2022

25. Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers
    Limin Yang, Zhi Chen, Jacopo Cortellazzi, Feargus Pendlebury, Kevin Tu, Fabio Pierazzi, Lorenzo Cavallaro, Gang Wang
    11 Feb 2022 · AAML

26. Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation
    Wenxiao Wang, Alexander Levine, S. Feizi
    05 Feb 2022 · AAML

27. Datamodels: Predicting Predictions from Training Data
    Andrew Ilyas, Sung Min Park, Logan Engstrom, Guillaume Leclerc, Aleksander Madry
    01 Feb 2022 · TDI

28. SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification
    Ashwinee Panda, Saeed Mahloujifar, A. Bhagoji, Supriyo Chakraborty, Prateek Mittal
    12 Dec 2021 · FedML, AAML

29. Training Experimentally Robust and Interpretable Binarized Regression Models Using Mixed-Integer Programming
    Sanjana Tule, Nhi H. Le, B. Say
    01 Dec 2021

30. FROB: Few-shot ROBust Model for Classification and Out-of-Distribution Detection
    Nikolaos Dionelis, Mehrdad Yaghoobi, Sotirios A. Tsaftaris
    30 Nov 2021 · OODD

31. Adversarial Neuron Pruning Purifies Backdoored Deep Models
    Dongxian Wu, Yisen Wang
    27 Oct 2021 · AAML

32. Certifying Robustness to Programmable Data Bias in Decision Trees
    Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni
    08 Oct 2021

33. Exploring Counterfactual Explanations Through the Lens of Adversarial Examples: A Theoretical and Empirical Analysis
    Martin Pawelczyk, Chirag Agarwal, Shalmali Joshi, Sohini Upadhyay, Himabindu Lakkaraju
    18 Jun 2021 · AAML

34. Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
    Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, Aleksander Madry, Bo Li, Tom Goldstein
    18 Dec 2020 · SILM

35. Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks
    Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong
    07 Dec 2020 · AAML

36. Concealed Data Poisoning Attacks on NLP Models
    Eric Wallace, Tony Zhao, Shi Feng, Sameer Singh
    23 Oct 2020 · SILM

37. SoK: Certified Robustness for Deep Neural Networks
    Linyi Li, Tao Xie, Bo Li
    09 Sep 2020 · AAML

38. Backdoor Learning: A Survey
    Yiming Li, Yong Jiang, Zhifeng Li, Shutao Xia
    17 Jul 2020 · AAML

39. Subpopulation Data Poisoning Attacks
    Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea
    24 Jun 2020 · AAML, SILM