ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Certified Defenses against Adversarial Examples
Aditi Raghunathan, Jacob Steinhardt, Percy Liang
AAML · 29 January 2018 · arXiv:1801.09344

Papers citing "Certified Defenses against Adversarial Examples"

50 / 250 papers shown
Practical Convex Formulation of Robust One-hidden-layer Neural Network Training
Yatong Bai, Tanmay Gautam, Yujie Gai, Somayeh Sojoudi
AAML · 25 May 2021

A Review of Formal Methods applied to Machine Learning
Caterina Urban, Antoine Miné
06 Apr 2021

Adversarial YOLO: Defense Human Detection Patch Attacks via Detecting Adversarial Patches
Nan Ji, YanFei Feng, Haidong Xie, Xueshuang Xiang, Naijin Liu
AAML · 16 Mar 2021

Adversarial Training is Not Ready for Robot Learning
Mathias Lechner, Ramin Hasani, Radu Grosu, Daniela Rus, T. Henzinger
AAML · 15 Mar 2021

A Multiclass Boosting Framework for Achieving Fast and Provable Adversarial Robustness
Jacob D. Abernethy, Pranjal Awasthi, Satyen Kale
AAML · 01 Mar 2021

Resilient Machine Learning for Networked Cyber Physical Systems: A Survey for Machine Learning Security to Securing Machine Learning for CPS
Felix O. Olowononi, D. Rawat, Chunmei Liu
14 Feb 2021

On the Paradox of Certified Training
Nikola Jovanović, Mislav Balunović, Maximilian Baader, Martin Vechev
OOD · 12 Feb 2021

Fast Training of Provably Robust Neural Networks by SingleProp
Akhilan Boopathy, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, Luca Daniel
AAML · 01 Feb 2021

A Comprehensive Evaluation Framework for Deep Model Robustness
Jun Guo, Wei Bao, Jiakai Wang, Yuqing Ma, Xing Gao, Gang Xiao, Aishan Liu, Zehao Zhao, Xianglong Liu, Wenjun Wu
AAML, ELM · 24 Jan 2021

Fundamental Tradeoffs in Distributionally Adversarial Training
M. Mehrabi, Adel Javanmard, Ryan A. Rossi, Anup B. Rao, Tung Mai
AAML · 15 Jan 2021

A Study on the Uncertainty of Convolutional Layers in Deep Neural Networks
Hao Shen, Sihong Chen, Ran Wang
27 Nov 2020

Certified Monotonic Neural Networks
Xingchao Liu, Xing Han, Na Zhang, Qiang Liu
20 Nov 2020

Adversarially Robust Classification based on GLRT
Bhagyashree Puranik, Upamanyu Madhow, Ramtin Pedarsani
VLM, AAML · 16 Nov 2020

Almost Tight L0-norm Certified Robustness of Top-k Predictions against Adversarial Perturbations
Jinyuan Jia, Binghui Wang, Xiaoyu Cao, Hongbin Liu, Neil Zhenqiang Gong
15 Nov 2020

Adversarial Robust Training of Deep Learning MRI Reconstruction Models
Francesco Calivá, Kaiyang Cheng, Rutwik Shah, V. Pedoia
OOD, AAML, MedIm · 30 Oct 2020

Precise Statistical Analysis of Classification Accuracies for Adversarial Training
Adel Javanmard, Mahdi Soltanolkotabi
AAML · 21 Oct 2020

Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification
Davide Corsi, Enrico Marchesini, Alessandro Farinelli
OffRL · 19 Oct 2020

Viewmaker Networks: Learning Views for Unsupervised Representation Learning
Alex Tamkin, Mike Wu, Noah D. Goodman
SSL · 14 Oct 2020

Adversarial Boot Camp: label free certified robustness in one epoch
Ryan Campbell, Chris Finlay, Adam M. Oberman
AAML · 05 Oct 2020

Block-wise Image Transformation with Secret Key for Adversarially Robust Defense
Maungmaung Aprilpyone, Hitoshi Kiya
02 Oct 2020

Utility is in the Eye of the User: A Critique of NLP Leaderboards
Kawin Ethayarajh, Dan Jurafsky
ELM · 29 Sep 2020

Deep Learning & Software Engineering: State of Research and Future Directions
P. Devanbu, Matthew B. Dwyer, Sebastian G. Elbaum, M. Lowry, Kevin Moran, Denys Poshyvanyk, Baishakhi Ray, Rishabh Singh, Xiangyu Zhang
17 Sep 2020

Defending Against Multiple and Unforeseen Adversarial Videos
Shao-Yuan Lo, Vishal M. Patel
AAML · 11 Sep 2020

SoK: Certified Robustness for Deep Neural Networks
Linyi Li, Tao Xie, Bo-wen Li
AAML · 09 Sep 2020

Rethinking Non-idealities in Memristive Crossbars for Adversarial Robustness in Neural Networks
Abhiroop Bhattacharjee, Priyadarshini Panda
AAML · 25 Aug 2020

Robust Deep Reinforcement Learning through Adversarial Loss
Tuomas P. Oikarinen, Wang Zhang, Alexandre Megretski, Luca Daniel, Tsui-Wei Weng
AAML · 05 Aug 2020

Robust Machine Learning via Privacy/Rate-Distortion Theory
Ye Wang, Shuchin Aeron, Adnan Siraj Rakin, T. Koike-Akino, P. Moulin
OOD · 22 Jul 2020

AdvFoolGen: Creating Persistent Troubles for Deep Classifiers
Yuzhen Ding, Nupur Thakur, Baoxin Li
AAML · 20 Jul 2020

Do Adversarially Robust ImageNet Models Transfer Better?
Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, A. Madry
16 Jul 2020

Understanding Adversarial Examples from the Mutual Influence of Images and Perturbations
Chaoning Zhang, Philipp Benz, Tooba Imtiaz, In-So Kweon
SSL, AAML · 13 Jul 2020

Measuring Robustness to Natural Distribution Shifts in Image Classification
Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, Ludwig Schmidt
OOD · 01 Jul 2020

Counterexample-Guided Learning of Monotonic Neural Networks
Aishwarya Sivaraman, G. Farnadi, T. Millstein, Mathias Niepert
16 Jun 2020

On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them
Chen Liu, Mathieu Salzmann, Tao R. Lin, Ryota Tomioka, Sabine Süsstrunk
AAML · 15 Jun 2020

Defensive Approximation: Securing CNNs using Approximate Computing
Amira Guesmi, Ihsen Alouani, Khaled N. Khasawneh, M. Baklouti, T. Frikha, Mohamed Abid, Nael B. Abu-Ghazaleh
AAML · 13 Jun 2020

D-square-B: Deep Distribution Bound for Natural-looking Adversarial Attack
Qiuling Xu, Guanhong Tao, Xiangyu Zhang
AAML · 12 Jun 2020

Calibrated Surrogate Losses for Adversarially Robust Classification
Han Bao, Clayton Scott, Masashi Sugiyama
28 May 2020

PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking
Chong Xiang, A. Bhagoji, Vikash Sehwag, Prateek Mittal
AAML · 17 May 2020

Encryption Inspired Adversarial Defense for Visual Classification
Maungmaung Aprilpyone, Hitoshi Kiya
16 May 2020

Towards Understanding the Adversarial Vulnerability of Skeleton-based Action Recognition
Tianhang Zheng, Sheng Liu, Changyou Chen, Junsong Yuan, Baochun Li, K. Ren
AAML · 14 May 2020

Blind Backdoors in Deep Learning Models
Eugene Bagdasaryan, Vitaly Shmatikov
AAML, FedML, SILM · 08 May 2020

Robustness Certification of Generative Models
M. Mirman, Timon Gehr, Martin Vechev
AAML · 30 Apr 2020

Provably robust deep generative models
Filipe Condessa, Zico Kolter
AAML, OOD · 22 Apr 2020

Certifying Joint Adversarial Robustness for Model Ensembles
M. Jonas, David Evans
AAML · 21 Apr 2020

Single-step Adversarial training with Dropout Scheduling
Vivek B.S., R. Venkatesh Babu
OOD, AAML · 18 Apr 2020

When the Guard failed the Droid: A case study of Android malware
Harel Berger, Chen Hajaj, A. Dvir
AAML · 31 Mar 2020

Safety-Aware Hardening of 3D Object Detection Neural Network Systems
Chih-Hong Cheng
3DPC · 25 Mar 2020

Diversity can be Transferred: Output Diversification for White- and Black-box Attacks
Y. Tashiro, Yang Song, Stefano Ermon
AAML · 15 Mar 2020

When are Non-Parametric Methods Robust?
Robi Bhattacharjee, Kamalika Chaudhuri
AAML · 13 Mar 2020

Model Assertions for Monitoring and Improving ML Models
Daniel Kang, Deepti Raghavan, Peter Bailis, Matei A. Zaharia
03 Mar 2020

Analyzing Accuracy Loss in Randomized Smoothing Defenses
Yue Gao, Harrison Rosenberg, Kassem Fawaz, S. Jha, Justin Hsu
AAML · 03 Mar 2020
Page 1 of 5