Robust Local Features for Improving the Generalization of Adversarial Training

23 September 2019
Chuanbiao Song, Kun He, Jiadong Lin, Liwei Wang, John E. Hopcroft
OOD · AAML

Papers citing "Robust Local Features for Improving the Generalization of Adversarial Training"

24 papers
Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks
Jiadong Lin, Chuanbiao Song, Kun He, Liwei Wang, John E. Hopcroft
AAML · 17 Aug 2019

Adversarially Robust Generalization Just Requires More Unlabeled Data
Runtian Zhai, Tianle Cai, Di He, Chen Dan, Kun He, John E. Hopcroft, Liwei Wang
03 Jun 2019

Interpreting Adversarially Trained Convolutional Neural Networks
Tianyuan Zhang, Zhanxing Zhu
AAML · GAN · FAtt · 23 May 2019

A Direct Approach to Robust Deep Learning Using Adversarial Networks
Huaxia Wang, Chun-Nam Yu
GAN · AAML · OOD · 23 May 2019

Adversarial Examples Are Not Bugs, They Are Features
Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry
SILM · 06 May 2019

NATTACK: Learning the Distributions of Adversarial Examples for an Improved Black-Box Attack on Deep Neural Networks
Yandong Li, Lijun Li, Liqiang Wang, Tong Zhang, Boqing Gong
AAML · 01 May 2019

On the Sensitivity of Adversarial Robustness to Input Data Distributions
G. Ding, Kry Yik-Chau Lui, Xiaomeng Jin, Luyu Wang, Ruitong Huang
OOD · 22 Feb 2019

Theoretically Principled Trade-off between Robustness and Accuracy
Hongyang R. Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, L. Ghaoui, Michael I. Jordan
24 Jan 2019

The Limitations of Adversarial Training and the Blind-Spot Attack
Huan Zhang, Hongge Chen, Zhao Song, Duane S. Boning, Inderjit S. Dhillon, Cho-Jui Hsieh
AAML · 15 Jan 2019

ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix Wichmann, Wieland Brendel
29 Nov 2018

Improving the Generalization of Adversarial Training with Domain Adaptation
Chuanbiao Song, Kun He, Liwei Wang, John E. Hopcroft
AAML · OOD · 01 Oct 2018

Adversarially Robust Generalization Requires More Data
Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Madry
OOD · AAML · 30 Apr 2018

Stochastic Activation Pruning for Robust Adversarial Defense
Guneet Singh Dhillon, Kamyar Azizzadenesheli, Zachary Chase Lipton, Jeremy Bernstein, Jean Kossaifi, Aran Khanna, Anima Anandkumar
AAML · 05 Mar 2018

Adversarial Risk and the Dangers of Evaluating Against Weak Attacks
J. Uesato, Brendan O'Donoghue, Aaron van den Oord, Pushmeet Kohli
AAML · 15 Feb 2018

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, D. Wagner
AAML · 01 Feb 2018

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
SILM · OOD · 19 Jun 2017

SmoothGrad: removing noise by adding noise
D. Smilkov, Nikhil Thorat, Been Kim, F. Viégas, Martin Wattenberg
FAtt · ODL · 12 Jun 2017

Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini, D. Wagner
AAML · 20 May 2017

Delving into Transferable Adversarial Examples and Black-box Attacks
Yanpei Liu, Xinyun Chen, Chang-rui Liu, D. Song
AAML · 08 Nov 2016

Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner
OOD · AAML · 16 Aug 2016

Wide Residual Networks
Sergey Zagoruyko, N. Komodakis
23 May 2016

Practical Black-Box Attacks against Machine Learning
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami
MLAU · AAML · 08 Feb 2016

Explaining and Harnessing Adversarial Examples
Ian Goodfellow, Jonathon Shlens, Christian Szegedy
AAML · GAN · 20 Dec 2014

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
AAML · 21 Dec 2013