ResearchTrend.AI
Disentangling Adversarial Robustness and Generalization (arXiv 1812.00740)

David Stutz, Matthias Hein, Bernt Schiele
3 December 2018 · AAML, OOD

Papers citing "Disentangling Adversarial Robustness and Generalization"

30 / 180 papers shown

• Adversarial Training against Location-Optimized Adversarial Patches
  Sukrut Rao, David Stutz, Bernt Schiele
  AAML · 05 May 2020 · 92 citations

• Robust Generative Adversarial Network
  Shufei Zhang, Zhuang Qian, Kaizhu Huang, Jimin Xiao, Yuan He
  28 Apr 2020 · 9 citations

• Towards Feature Space Adversarial Attack
  Qiuling Xu, Guanhong Tao, Shuyang Cheng, Xinming Zhang
  GAN, AAML · 26 Apr 2020 · 25 citations

• Adversarial Training for Large Neural Language Models
  Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, Jianfeng Gao
  AAML · 20 Apr 2020 · 186 citations

• Learning to Learn Single Domain Generalization
  Fengchun Qiao, Long Zhao, Xi Peng
  OOD · 30 Mar 2020 · 444 citations

• Adversarial Robustness on In- and Out-Distribution Improves Explainability
  Maximilian Augustin, Alexander Meinke, Matthias Hein
  OOD · 20 Mar 2020 · 102 citations

• Towards Face Encryption by Generating Adversarial Identity Masks
  Xiao Yang, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu, YueFeng Chen, H. Xue
  AAML, PICV · 15 Mar 2020 · 75 citations

• No Surprises: Training Robust Lung Nodule Detection for Low-Dose CT Scans by Augmenting with Adversarial Attacks
  Siqi Liu, A. Setio, Florin-Cristian Ghesu, Eli Gibson, Sasa Grbic, Bogdan Georgescu, Dorin Comaniciu
  AAML, OOD · 08 Mar 2020 · 40 citations

• Learn2Perturb: an End-to-end Feature Perturbation Learning to Improve Adversarial Robustness
  Ahmadreza Jeddi, M. Shafiee, Michelle Karg, C. Scharfenberger, A. Wong
  OOD, AAML · 02 Mar 2020 · 67 citations

• The Curious Case of Adversarially Robust Models: More Data Can Help, Double Descend, or Hurt Generalization
  Yifei Min, Lin Chen, Amin Karbasi
  AAML · 25 Feb 2020 · 69 citations

• Gödel's Sentence Is An Adversarial Example But Unsolvable
  Xiaodong Qi, Lansheng Han
  AAML · 25 Feb 2020 · 0 citations

• Adversarial Perturbations Prevail in the Y-Channel of the YCbCr Color Space
  Camilo Pestana, Naveed Akhtar, Wei Liu, D. Glance, Ajmal Mian
  AAML · 25 Feb 2020 · 10 citations

• More Data Can Expand the Generalization Gap Between Adversarially Robust and Standard Models
  Lin Chen, Yifei Min, Mingrui Zhang, Amin Karbasi
  OOD · 11 Feb 2020 · 64 citations

• Segmentations-Leak: Membership Inference Attacks and Defenses in Semantic Image Segmentation
  Yang He, Shadi Rahimian, Bernt Schiele, Mario Fritz
  MIACV · 20 Dec 2019 · 52 citations

• Explaining Classifiers using Adversarial Perturbations on the Perceptual Ball
  Andrew Elliott, Stephen Law, Chris Russell
  AAML · 19 Dec 2019 · 4 citations

• On-manifold Adversarial Data Augmentation Improves Uncertainty Calibration
  Kanil Patel, William H. Beluch, Dan Zhang, Michael Pfeiffer, Bin Yang
  UQCV · 16 Dec 2019 · 30 citations

• Deep Neural Network Fingerprinting by Conferrable Adversarial Examples
  Nils Lukas, Yuxuan Zhang, Florian Kerschbaum
  MLAU, FedML, AAML · 02 Dec 2019 · 146 citations

• Adversarial Examples Improve Image Recognition
  Cihang Xie, Mingxing Tan, Boqing Gong, Jiang Wang, Alan Yuille, Quoc V. Le
  AAML · 21 Nov 2019 · 567 citations

• Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks
  David Stutz, Matthias Hein, Bernt Schiele
  AAML · 14 Oct 2019 · 5 citations

• Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
  Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, Anil K. Jain
  AAML · 17 Sep 2019 · 680 citations

• Random Directional Attack for Fooling Deep Neural Networks
  Wenjian Luo, Chenwang Wu, Nan Zhou, Li Ni
  AAML · 06 Aug 2019 · 4 citations

• Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness
  Walt Woods, Jack H Chen, C. Teuscher
  AAML · 07 Jun 2019 · 46 citations

• Understanding Adversarial Behavior of DNNs by Disentangling Non-Robust and Robust Components in Performance Metric
  Yujun Shi, B. Liao, Guangyong Chen, Yun-Hai Liu, Ming-Ming Cheng, Jiashi Feng
  AAML · 06 Jun 2019 · 2 citations

• Meta Dropout: Learning to Perturb Features for Generalization
  Haebeom Lee, Taewook Nam, Eunho Yang, Sung Ju Hwang
  OOD · 30 May 2019 · 3 citations

• Adversarial Examples Are Not Bugs, They Are Features
  Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry
  SILM · 06 May 2019 · 1,846 citations

• Bridging Adversarial Robustness and Gradient Interpretability
  Beomsu Kim, Junghoon Seo, Taegyun Jeon
  AAML · 27 Mar 2019 · 40 citations

• Robustness of Generalized Learning Vector Quantization Models against Adversarial Attacks
  S. Saralajew, Lars Holdijk, Maike Rees, T. Villmann
  OOD · 01 Feb 2019 · 19 citations

• Face Hallucination Revisited: An Exploratory Study on Dataset Bias
  Klemen Grm, Martin Pernuš, Leo Cluzel, Walter J. Scheirer, Simon Dobrišek, Vitomir Štruc
  CVBM · 21 Dec 2018 · 11 citations

• Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem
  Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf
  OODD · 13 Dec 2018 · 559 citations

• Generating Natural Adversarial Examples
  Zhengli Zhao, Dheeru Dua, Sameer Singh
  GAN, AAML · 31 Oct 2017 · 601 citations