ResearchTrend.AI
arXiv: 2006.14768
Deep Partition Aggregation: Provable Defense against General Poisoning Attacks
26 June 2020
Alexander Levine, S. Feizi
AAML

Papers citing "Deep Partition Aggregation: Provable Defense against General Poisoning Attacks"
32 papers shown
Cert-SSB: Toward Certified Sample-Specific Backdoor Defense
Ting Qiao, Yixuan Wang, Xing Liu, Sixing Wu, Jianbing Li, Yiming Li
AAML, SILM · 30 Apr 2025
Frontier AI's Impact on the Cybersecurity Landscape
Wenbo Guo, Yujin Potter, Tianneng Shi, Zhun Wang, Andy Zhang, Dawn Song
07 Apr 2025
Game-Theoretic Defenses for Robust Conformal Prediction Against Adversarial Attacks in Medical Imaging
Rui Luo, Jie Bao, Zhixin Zhou, Chuangyin Dang
MedIm, AAML · 07 Nov 2024
Timber! Poisoning Decision Trees
Stefano Calzavara, Lorenzo Cazzaro, Massimo Vettori
AAML · 01 Oct 2024
Model-agnostic clean-label backdoor mitigation in cybersecurity environments
Giorgio Severi, Simona Boboila, J. Holodnak, K. Kratkiewicz, Rauf Izmailov, Alina Oprea
AAML · 11 Jul 2024
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models
Omead Brandon Pooladzandi, Jeffrey Q. Jiang, Sunay Bhat, Gregory Pottie
AAML · 28 May 2024
Leakage-Resilient and Carbon-Neutral Aggregation Featuring the Federated AI-enabled Critical Infrastructure
Zehang Deng, Ruoxi Sun, Minhui Xue, Sheng Wen, S. Çamtepe, Surya Nepal, Yang Xiang
24 May 2024
Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
Wenqi Wei, Ling Liu
02 Feb 2024
Enhancing the Antidote: Improved Pointwise Certifications against Poisoning Attacks
Shijie Liu, Andrew C. Cullen, Paul Montague, S. Erfani, Benjamin I. P. Rubinstein
AAML · 15 Aug 2023
Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems
Ashim Gupta, Amrith Krishna
AAML · 31 May 2023
CUDA: Convolution-based Unlearnable Datasets
Vinu Sankar Sadasivan, Mahdi Soltanolkotabi, S. Feizi
MU · 07 Mar 2023
Certified Robust Neural Networks: Generalization and Corruption Resistance
Amine Bennouna, Ryan Lucas, Bart P. G. Van Parys
03 Mar 2023
Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar
SILM, AAML · 14 Feb 2023
Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
SSL · 06 Dec 2022
FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning
Kaiyuan Zhang, Guanhong Tao, Qiuling Xu, Shuyang Cheng, Shengwei An, ..., Shiwei Feng, Guangyu Shen, Pin-Yu Chen, Shiqing Ma, Xiangyu Zhang
FedML · 23 Oct 2022
BAFFLE: Hiding Backdoors in Offline Reinforcement Learning Datasets
Chen Gong, Zhou Yang, Yunru Bai, Junda He, Jieke Shi, ..., Arunesh Sinha, Bowen Xu, Xinwen Hou, David Lo, Guoliang Fan
AAML, OffRL · 07 Oct 2022
On Optimal Learning Under Targeted Data Poisoning
Steve Hanneke, Amin Karbasi, Mohammad Mahmoody, Idan Mehalel, Shay Moran
AAML, FedML · 06 Oct 2022
On the Robustness of Random Forest Against Untargeted Data Poisoning: An Ensemble-Based Approach
M. Anisetti, C. Ardagna, Alessandro Balestrucci, Nicola Bena, Ernesto Damiani, C. Yeun
AAML, OOD · 28 Sep 2022
Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks
Chulin Xie, Yunhui Long, Pin-Yu Chen, Qinbin Li, Arash Nourian, Sanmi Koyejo, Bo Li
FedML · 08 Sep 2022
Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks
Tianwei Liu, Yu Yang, Baharan Mirzasoleiman
AAML · 14 Aug 2022
Adapting and Evaluating Influence-Estimation Methods for Gradient-Boosted Decision Trees
Jonathan Brophy, Zayd Hammoudeh, Daniel Lowd
TDI · 30 Apr 2022
COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks
Fan Wu, Linyi Li, Chejian Xu, Huan Zhang, B. Kailkhura, K. Kenthapadi, Ding Zhao, Bo-wen Li
AAML, OffRL · 16 Mar 2022
Robustly-reliable learners under poisoning attacks
Maria-Florina Balcan, Avrim Blum, Steve Hanneke, Dravyansh Sharma
AAML, OOD · 08 Mar 2022
Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
Zayd Hammoudeh, Daniel Lowd
TDI · 25 Jan 2022
SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification
Ashwinee Panda, Saeed Mahloujifar, A. Bhagoji, Supriyo Chakraborty, Prateek Mittal
FedML, AAML · 12 Dec 2021
Adversarial Neuron Pruning Purifies Backdoored Deep Models
Dongxian Wu, Yisen Wang
AAML · 27 Oct 2021
Certifying Robustness to Programmable Data Bias in Decision Trees
Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni
08 Oct 2021
SoK: Machine Learning Governance
Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot
20 Sep 2021
A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers
Xi Li, David J. Miller, Zhen Xiang, G. Kesidis
AAML · 28 May 2021
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, A. Madry, Bo-wen Li, Tom Goldstein
SILM · 18 Dec 2020
Concealed Data Poisoning Attacks on NLP Models
Eric Wallace, Tony Zhao, Shi Feng, Sameer Singh
SILM · 23 Oct 2020
SoK: Certified Robustness for Deep Neural Networks
Linyi Li, Tao Xie, Bo-wen Li
AAML · 09 Sep 2020