Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation
Matthias Hein, Maksym Andriushchenko
arXiv:1705.08475, 23 May 2017 [AAML]

Papers citing "Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation" (50 of 119 papers shown)
Relating Adversarially Robust Generalization to Flat Minima (09 Apr 2021) [OOD]
David Stutz, Matthias Hein, Bernt Schiele

A Review of Formal Methods applied to Machine Learning (06 Apr 2021)
Caterina Urban, Antoine Miné

A Unified Game-Theoretic Interpretation of Adversarial Robustness (12 Mar 2021) [AAML]
Jie Ren, Die Zhang, Yisen Wang, Lu Chen, Zhanpeng Zhou, ..., Xu Cheng, Xin Wang, Meng Zhou, Jie Shi, Quanshi Zhang

Non-Singular Adversarial Robustness of Neural Networks (23 Feb 2021) [AAML, OOD]
Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen

Fast Training of Provably Robust Neural Networks by SingleProp (01 Feb 2021) [AAML]
Akhilan Boopathy, Tsui-Wei Weng, Sijia Liu, Pin-Yu Chen, Gaoyuan Zhang, Luca Daniel

Increasing the Confidence of Deep Neural Networks by Coverage Analysis (28 Jan 2021) [AAML]
Giulio Rossolini, Alessandro Biondi, Giorgio Buttazzo

ROBY: Evaluating the Robustness of a Deep Model by its Decision Boundaries (18 Dec 2020) [AAML]
Jinyin Chen, Zhen Wang, Haibin Zheng, Jun Xiao, Zhaoyan Ming
Feature Space Singularity for Out-of-Distribution Detection (30 Nov 2020) [OODD]
Haiwen Huang, Zhihan Li, Lulu Wang, Sishuo Chen, Bin Dong, Xinyu Zhou

A Study on the Uncertainty of Convolutional Layers in Deep Neural Networks (27 Nov 2020)
Hao Shen, Sihong Chen, Ran Wang

Adversarially Robust Classification based on GLRT (16 Nov 2020) [VLM, AAML]
Bhagyashree Puranik, Upamanyu Madhow, Ramtin Pedarsani

Characterizing and Taming Model Instability Across Edge Devices (18 Oct 2020)
Eyal Cidon, Evgenya Pergament, Zain Asgar, Asaf Cidon, Sachin Katti

Input Hessian Regularization of Neural Networks (14 Sep 2020) [AAML]
Waleed Mustafa, Robert A. Vandermeulen, Marius Kloft

SoK: Certified Robustness for Deep Neural Networks (09 Sep 2020) [AAML]
Linyi Li, Tao Xie, Bo-wen Li

Adversarial Examples on Object Recognition: A Comprehensive Survey (07 Aug 2020) [AAML]
A. Serban, E. Poll, Joost Visser
A Le Cam Type Bound for Adversarial Learning and Applications (01 Jul 2020) [AAML]
Qiuling Xu, Kevin Bello, Jean Honorio

Counterexample-Guided Learning of Monotonic Neural Networks (16 Jun 2020)
Aishwarya Sivaraman, G. Farnadi, T. Millstein, Guy Van den Broeck

Calibrated Surrogate Losses for Adversarially Robust Classification (28 May 2020)
Han Bao, Clayton Scott, Masashi Sugiyama

Training robust neural networks using Lipschitz bounds (06 May 2020)
Patricia Pauli, Anne Koch, J. Berberich, Paul Kohler, Frank Allgöwer

Verification of Deep Convolutional Neural Networks Using ImageStars (12 Apr 2020) [AAML]
Hoang-Dung Tran, Stanley Bak, Weiming Xiang, Taylor T. Johnson

Adversarial Robustness on In- and Out-Distribution Improves Explainability (20 Mar 2020) [OOD]
Maximilian Augustin, Alexander Meinke, Matthias Hein

When are Non-Parametric Methods Robust? (13 Mar 2020) [AAML]
Robi Bhattacharjee, Kamalika Chaudhuri
Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness (02 Mar 2020) [OOD, AAML]
Ahmadreza Jeddi, M. Shafiee, Michelle Karg, C. Scharfenberger, A. Wong

Attacks Which Do Not Kill Training Make Adversarial Learning Stronger (26 Feb 2020) [AAML]
Jingfeng Zhang, Xilie Xu, Bo Han, Gang Niu, Li-zhen Cui, Masashi Sugiyama, Mohan S. Kankanhalli

Robustness Verification for Transformers (16 Feb 2020) [AAML]
Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, Cho-Jui Hsieh

Efficient Adversarial Training with Transferable Adversarial Examples (27 Dec 2019) [AAML]
Haizhong Zheng, Ziqi Zhang, Juncheng Gu, Honglak Lee, A. Prakash

Benchmarking Adversarial Robustness (26 Dec 2019) [AAML]
Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, Jun Zhu
There is Limited Correlation between Coverage and Robustness for Deep Neural Networks (14 Nov 2019) [OOD, AAML]
Yizhen Dong, Peixin Zhang, Jingyi Wang, Shuang Liu, Jun Sun, Jianye Hao, Xinyu Wang, Li Wang, J. Dong, Ting Dai

Enhancing Certifiable Robustness via a Deep Model Ensemble (31 Oct 2019)
Huan Zhang, Minhao Cheng, Cho-Jui Hsieh

Probabilistic Verification and Reachability Analysis of Neural Networks via Semidefinite Programming (09 Oct 2019) [AAML]
Mahyar Fazlyab, M. Morari, George J. Pappas

Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks (27 Sep 2019) [AAML, MQ]
Rémi Bernhard, Pierre-Alain Moëllic, J. Dutertre

Towards neural networks that provably know when they don't know (26 Sep 2019) [OODD]
Alexander Meinke, Matthias Hein

Minimally distorted Adversarial Examples with a Fast Adaptive Boundary Attack (03 Jul 2019) [AAML]
Francesco Croce, Matthias Hein

Robustness Guarantees for Deep Neural Networks on Videos (28 Jun 2019) [AAML]
Min Wu, Marta Z. Kwiatkowska
Certifiable Robustness and Robust Training for Graph Convolutional Networks (28 Jun 2019) [OffRL]
Daniel Zügner, Stephan Günnemann

Adversarial Attack Generation Empowered by Min-Max Optimization (09 Jun 2019) [AAML]
Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, M. Fardad, Yangqiu Song

Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks (08 Jun 2019)
Maksym Andriushchenko, Matthias Hein

Robustness for Non-Parametric Classification: A Generic Attack and Defense (07 Jun 2019) [SILM, AAML]
Yao-Yuan Yang, Cyrus Rashtchian, Yizhen Wang, Kamalika Chaudhuri

Controlling Neural Level Sets (28 May 2019) [AI4CE]
Matan Atzmon, Niv Haim, Lior Yariv, Ofer Israelov, Haggai Maron, Y. Lipman

Scaleable input gradient regularization for adversarial robustness (27 May 2019) [AAML]
Chris Finlay, Adam M. Oberman

POPQORN: Quantifying Robustness of Recurrent Neural Networks (17 May 2019) [AAML]
Ching-Yun Ko, Zhaoyang Lyu, Tsui-Wei Weng, Luca Daniel, Ngai Wong, Dahua Lin
Virtual Mixup Training for Unsupervised Domain Adaptation (10 May 2019)
Xudong Mao, Yun Ma, Zhenguo Yang, Yangbin Chen, Qing Li

Adversarial Training for Free! (29 Apr 2019) [AAML]
Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John P. Dickerson, Christoph Studer, L. Davis, Gavin Taylor, Tom Goldstein

Scaling up the randomized gradient-free adversarial attack reveals overestimation of robustness using established attacks (27 Mar 2019) [AAML]
Francesco Croce, Jonas Rauber, Matthias Hein

PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach (18 Dec 2018) [AAML]
Tsui-Wei Weng, Pin-Yu Chen, Lam M. Nguyen, M. Squillante, Ivan V. Oseledets, Luca Daniel

Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem (13 Dec 2018) [OODD]
Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf

MMA Training: Direct Input Space Margin Maximization through Adversarial Training (06 Dec 2018) [AAML]
G. Ding, Yash Sharma, Kry Yik-Chau Lui, Ruitong Huang
Prototype-based Neural Network Layers: Incorporating Vector Quantization (04 Dec 2018) [MQ]
S. Saralajew, Lars Holdijk, Maike Rees, T. Villmann

A randomized gradient-free attack on ReLU networks (28 Nov 2018) [AAML]
Francesco Croce, Matthias Hein

Robustness via curvature regularization, and vice versa (23 Nov 2018) [AAML]
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, J. Uesato, P. Frossard

Theoretical Analysis of Adversarial Learning: A Minimax Approach (13 Nov 2018) [AAML]
Zhuozhuo Tu, Jingwei Zhang, Dacheng Tao