ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Certified Robustness to Adversarial Examples with Differential Privacy (arXiv:1802.03471)

9 February 2018
Mathias Lécuyer
Vaggelis Atlidakis
Roxana Geambasu
Daniel J. Hsu
Suman Jana
    SILM
    AAML
ArXiv / PDF / HTML

Papers citing "Certified Robustness to Adversarial Examples with Differential Privacy"

50 / 208 papers shown
Trustworthy AI: From Principles to Practices
Bo-wen Li
Peng Qi
Bo Liu
Shuai Di
Jingen Liu
Jiquan Pei
Jinfeng Yi
Bowen Zhou
119
356
0
04 Oct 2021
Local Intrinsic Dimensionality Signals Adversarial Perturbations
Sandamal Weerasinghe
T. Alpcan
S. Erfani
C. Leckie
Benjamin I. P. Rubinstein
AAML
23
0
0
24 Sep 2021
CC-Cert: A Probabilistic Approach to Certify General Robustness of Neural Networks
Mikhail Aleksandrovich Pautov
Nurislam Tursynbek
Marina Munkhoeva
Nikita Muravev
Aleksandr Petiushko
Ivan Oseledets
AAML
52
16
0
22 Sep 2021
An automatic differentiation system for the age of differential privacy
Dmitrii Usynin
Alexander Ziller
Moritz Knolle
Andrew Trask
Kritika Prakash
Daniel Rueckert
Georgios Kaissis
30
3
0
22 Sep 2021
PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier
Chong Xiang
Saeed Mahloujifar
Prateek Mittal
VLM
AAML
24
73
0
20 Aug 2021
Advances in adversarial attacks and defenses in computer vision: A survey
Naveed Akhtar
Ajmal Mian
Navid Kardan
M. Shah
AAML
31
236
0
01 Aug 2021
On the Certified Robustness for Ensemble Models and Beyond
Zhuolin Yang
Linyi Li
Xiaojun Xu
B. Kailkhura
Tao Xie
Bo-wen Li
AAML
29
48
0
22 Jul 2021
ROPUST: Improving Robustness through Fine-tuning with Photonic Processors and Synthetic Gradients
Alessandro Cappelli
Julien Launay
Laurent Meunier
Ruben Ohana
Iacopo Poli
AAML
24
4
0
06 Jul 2021
Smoothed Differential Privacy
Ao Liu
Yu-Xiang Wang
Lirong Xia
33
0
0
04 Jul 2021
Scalable Certified Segmentation via Randomized Smoothing
Marc Fischer
Maximilian Baader
Martin Vechev
18
38
0
01 Jul 2021
Certified Robustness via Randomized Smoothing over Multiplicative Parameters of Input Transformations
Nikita Muravev
Aleksandr Petiushko
AAML
18
7
0
28 Jun 2021
Policy Smoothing for Provably Robust Reinforcement Learning
Aounon Kumar
Alexander Levine
S. Feizi
AAML
20
56
0
21 Jun 2021
Adversarial Training Helps Transfer Learning via Better Representations
Zhun Deng
Linjun Zhang
Kailas Vodrahalli
Kenji Kawaguchi
James Zou
GAN
36
53
0
18 Jun 2021
Large Scale Private Learning via Low-rank Reparametrization
Da Yu
Huishuai Zhang
Wei Chen
Jian Yin
Tie-Yan Liu
26
100
0
17 Jun 2021
Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks
Yulong Cao*
Ningfei Wang*
Chaowei Xiao
Dawei Yang
Jin Fang
Ruigang Yang
Qi Alfred Chen
Mingyan D. Liu
Bo-wen Li
AAML
24
218
0
17 Jun 2021
CRFL: Certifiably Robust Federated Learning against Backdoor Attacks
Chulin Xie
Minghao Chen
Pin-Yu Chen
Bo-wen Li
FedML
36
165
0
15 Jun 2021
Adversarial Robustness via Fisher-Rao Regularization
Marine Picot
Francisco Messina
Malik Boudiaf
Fabrice Labeau
Ismail Ben Ayed
Pablo Piantanida
AAML
28
23
0
12 Jun 2021
NoiLIn: Improving Adversarial Training and Correcting Stereotype of Noisy Labels
Jingfeng Zhang
Xilie Xu
Bo Han
Tongliang Liu
Gang Niu
Li-zhen Cui
Masashi Sugiyama
NoLa
AAML
23
9
0
31 May 2021
Random Noise Defense Against Query-Based Black-Box Attacks
Zeyu Qin
Yanbo Fan
H. Zha
Baoyuan Wu
AAML
27
59
0
23 Apr 2021
Adversarial Training is Not Ready for Robot Learning
Mathias Lechner
Ramin Hasani
Radu Grosu
Daniela Rus
T. Henzinger
AAML
38
34
0
15 Mar 2021
Membership Inference Attacks on Machine Learning: A Survey
Hongsheng Hu
Z. Salcic
Lichao Sun
Gillian Dobbie
Philip S. Yu
Xuyun Zhang
MIACV
35
412
0
14 Mar 2021
DP-Image: Differential Privacy for Image Data in Feature Space
Hanyu Xue
Bo Liu
Ming Ding
Tianqing Zhu
Dayong Ye
Li-Na Song
Wanlei Zhou
17
33
0
12 Mar 2021
PRIMA: General and Precise Neural Network Certification via Scalable Convex Hull Approximations
Mark Niklas Muller
Gleb Makarchuk
Gagandeep Singh
Markus Püschel
Martin Vechev
41
90
0
05 Mar 2021
Towards Evaluating the Robustness of Deep Diagnostic Models by Adversarial Attack
Mengting Xu
Tao Zhang
Zhongnian Li
Mingxia Liu
Daoqiang Zhang
AAML
OOD
MedIm
33
41
0
05 Mar 2021
A Multiclass Boosting Framework for Achieving Fast and Provable Adversarial Robustness
Jacob D. Abernethy
Pranjal Awasthi
Satyen Kale
AAML
27
6
0
01 Mar 2021
Globally-Robust Neural Networks
Klas Leino
Zifan Wang
Matt Fredrikson
AAML
OOD
80
126
0
16 Feb 2021
Low Curvature Activations Reduce Overfitting in Adversarial Training
Vasu Singla
Sahil Singla
David Jacobs
S. Feizi
AAML
32
45
0
15 Feb 2021
ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
Yugeng Liu
Rui Wen
Xinlei He
A. Salem
Zhikun Zhang
Michael Backes
Emiliano De Cristofaro
Mario Fritz
Yang Zhang
AAML
17
125
0
04 Feb 2021
DiPSeN: Differentially Private Self-normalizing Neural Networks For Adversarial Robustness in Federated Learning
Olakunle Ibitoye
M. O. Shafiq
Ashraf Matrawy
FedML
28
18
0
08 Jan 2021
Advances in Electron Microscopy with Deep Learning
Jeffrey M. Ede
35
2
0
04 Jan 2021
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum
Dimitris Tsipras
Chulin Xie
Xinyun Chen
Avi Schwarzschild
D. Song
A. Madry
Bo-wen Li
Tom Goldstein
SILM
27
270
0
18 Dec 2020
Robustness Threats of Differential Privacy
Nurislam Tursynbek
Aleksandr Petiushko
Ivan Oseledets
AAML
27
14
0
14 Dec 2020
DSRNA: Differentiable Search of Robust Neural Architectures
Ramtin Hosseini
Xingyi Yang
P. Xie
OOD
AAML
29
50
0
11 Dec 2020
Data-Dependent Randomized Smoothing
Motasem Alfarra
Adel Bibi
Philip Torr
Guohao Li
UQCV
28
34
0
08 Dec 2020
How Robust are Randomized Smoothing based Defenses to Data Poisoning?
Akshay Mehra
B. Kailkhura
Pin-Yu Chen
Jihun Hamm
OOD
AAML
20
32
0
02 Dec 2020
Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations
Jinyuan Jia
Binghui Wang
Xiaoyu Cao
Hongbin Liu
Neil Zhenqiang Gong
16
24
0
15 Nov 2020
Reliable Graph Neural Networks via Robust Aggregation
Simon Geisler
Daniel Zügner
Stephan Günnemann
AAML
OOD
6
71
0
29 Oct 2020
Mitigating Sybil Attacks on Differential Privacy based Federated Learning
Yupeng Jiang
Yong Li
Yipeng Zhou
Xi Zheng
FedML
AAML
29
15
0
20 Oct 2020
RobustBench: a standardized adversarial robustness benchmark
Francesco Croce
Maksym Andriushchenko
Vikash Sehwag
Edoardo Debenedetti
Nicolas Flammarion
M. Chiang
Prateek Mittal
Matthias Hein
VLM
234
680
0
19 Oct 2020
Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness
Guillermo Ortiz-Jiménez
Apostolos Modas
Seyed-Mohsen Moosavi-Dezfooli
P. Frossard
AAML
29
48
0
19 Oct 2020
A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning
Hongjun Wang
Guanbin Li
Xiaobai Liu
Liang Lin
GAN
AAML
21
22
0
15 Oct 2020
Higher-Order Certification for Randomized Smoothing
Jeet Mohapatra
Ching-Yun Ko
Tsui-Wei Weng
Pin-Yu Chen
Sijia Liu
Luca Daniel
AAML
20
44
0
13 Oct 2020
Adversarial Boot Camp: label free certified robustness in one epoch
Ryan Campbell
Chris Finlay
Adam M. Oberman
AAML
25
0
0
05 Oct 2020
Query complexity of adversarial attacks
Grzegorz Gluch
R. Urbanke
AAML
24
5
0
02 Oct 2020
Optimal Provable Robustness of Quantum Classification via Quantum Hypothesis Testing
Maurice Weber
Nana Liu
Bo-wen Li
Ce Zhang
Zhikuan Zhao
AAML
37
28
0
21 Sep 2020
Adversarial Training with Stochastic Weight Average
Joong-won Hwang
Youngwan Lee
Sungchan Oh
Yuseok Bae
OOD
AAML
29
11
0
21 Sep 2020
Review: Deep Learning in Electron Microscopy
Jeffrey M. Ede
36
79
0
17 Sep 2020
Certifying Confidence via Randomized Smoothing
Aounon Kumar
Alexander Levine
S. Feizi
Tom Goldstein
UQCV
33
39
0
17 Sep 2020
SoK: Certified Robustness for Deep Neural Networks
Linyi Li
Tao Xie
Bo-wen Li
AAML
33
128
0
09 Sep 2020
Robust Machine Learning via Privacy/Rate-Distortion Theory
Ye Wang
Shuchin Aeron
Adnan Siraj Rakin
T. Koike-Akino
P. Moulin
OOD
22
6
0
22 Jul 2020