Scaleable input gradient regularization for adversarial robustness
Chris Finlay, Adam M. Oberman
27 May 2019 · arXiv:1905.11468 · AAML

Papers citing "Scaleable input gradient regularization for adversarial robustness" (48 papers)
MIRACLE3D: Memory-efficient Integrated Robust Approach for Continual Learning on Point Clouds via Shape Model Construction
Hossein Resani, B. Nasihatkon
3DV · 0 citations · 08 Oct 2024
Is Adversarial Training with Compressed Datasets Effective?
Tong Chen, Raghavendra Selvan
AAML · 0 citations · 08 Feb 2024
Nash Equilibria, Regularization and Computation in Optimal Transport-Based Distributionally Robust Optimization
Soroosh Shafieezadeh-Abadeh, Liviu Aolaritei, Florian Dorfler, Daniel Kuhn
21 citations · 07 Mar 2023
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
Hadi Salman, Greg Yang, Jungshian Li, Pengchuan Zhang, Huan Zhang, Ilya P. Razenshteyn, Sébastien Bubeck
AAML · 551 citations · 09 Jun 2019
You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle
Dinghuai Zhang, Tianyuan Zhang, Yiping Lu, Zhanxing Zhu, Bin Dong
AAML · 361 citations · 02 May 2019
Adversarial Training for Free!
Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John P. Dickerson, Christoph Studer, L. Davis, Gavin Taylor, Tom Goldstein
AAML · 1,251 citations · 29 Apr 2019
The LogBarrier adversarial attack: making effective use of decision boundary information
Chris Finlay, Aram-Alexandre Pooladian, Adam M. Oberman
AAML · 25 citations · 25 Mar 2019
L1-norm double backpropagation adversarial defense
Ismaïla Seck, Gaëlle Loosli, S. Canu
GAN, AAML · 4 citations · 05 Mar 2019
On Evaluating Adversarial Robustness
Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin
ELM, AAML · 906 citations · 18 Feb 2019
Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter
AAML · 2,051 citations · 08 Feb 2019
Feature Denoising for Improving Adversarial Robustness
Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan Yuille, Kaiming He
912 citations · 09 Dec 2018
Robustness via curvature regularization, and vice versa
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, J. Uesato, P. Frossard
AAML · 319 citations · 23 Nov 2018
Semidefinite relaxations for certifying robustness to adversarial examples
Aditi Raghunathan, Jacob Steinhardt, Percy Liang
AAML · 439 citations · 02 Nov 2018
Training for Faster Adversarial Robustness Verification via Inducing ReLU Stability
Kai Y. Xiao, Vincent Tjeng, Nur Muhammad (Mahi) Shafiullah, Aleksander Madry
AAML, OOD · 201 citations · 09 Sep 2018
Limitations of the Lipschitz constant as a defense against adversarial examples
Todd P. Huster, C. Chiang, R. Chadha
AAML · 84 citations · 25 Jul 2018
Analysis of DAWNBench, a Time-to-Accuracy Machine Learning Performance Benchmark
Cody Coleman, Daniel Kang, Deepak Narayanan, Luigi Nardi, Tian Zhao, Jian Zhang, Peter Bailis, K. Olukotun, Christopher Ré, Matei A. Zaharia
117 citations · 04 Jun 2018
Scaling provable adversarial defenses
Eric Wong, Frank R. Schmidt, J. H. Metzen, J. Zico Kolter
AAML · 450 citations · 31 May 2018
Robustness May Be at Odds with Accuracy
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry
AAML · 1,782 citations · 30 May 2018
Adversarially Robust Training through Structured Gradient Regularization
Kevin Roth, Aurelien Lucchi, Sebastian Nowozin, Thomas Hofmann
23 citations · 22 May 2018
Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization
Daniel Jakubovitz, Raja Giryes
AAML · 210 citations · 23 Mar 2018
Adversarial Logit Pairing
Harini Kannan, Alexey Kurakin, Ian Goodfellow
AAML · 628 citations · 16 Mar 2018
Sensitivity and Generalization in Neural Networks: an Empirical Study
Roman Novak, Yasaman Bahri, Daniel A. Abolafia, Jeffrey Pennington, Jascha Narain Sohl-Dickstein
AAML · 441 citations · 23 Feb 2018
Adversarial Risk and the Dangers of Evaluating Against Weak Attacks
J. Uesato, Brendan O'Donoghue, Aaron van den Oord, Pushmeet Kohli
AAML · 605 citations · 15 Feb 2018
Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks
Yusuke Tsuzuku, Issei Sato, Masashi Sugiyama
AAML · 309 citations · 12 Feb 2018
First-order Adversarial Vulnerability of Neural Networks and Input Dimension
Carl-Johann Simon-Gabriel, Yann Ollivier, Léon Bottou, Bernhard Schölkopf, David Lopez-Paz
AAML · 48 citations · 05 Feb 2018
Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, D. Wagner
AAML · 3,194 citations · 01 Feb 2018
Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach
Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, D. Su, Yupeng Gao, Cho-Jui Hsieh, Luca Daniel
AAML · 468 citations · 31 Jan 2018
Certified Defenses against Adversarial Examples
Aditi Raghunathan, Jacob Steinhardt, Percy Liang
AAML · 969 citations · 29 Jan 2018
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
Wieland Brendel, Jonas Rauber, Matthias Bethge
AAML · 1,350 citations · 12 Dec 2017
Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
A. Ross, Finale Doshi-Velez
AAML · 683 citations · 26 Nov 2017
Provable defenses against adversarial examples via the convex outer adversarial polytope
Eric Wong, J. Zico Kolter
AAML · 1,504 citations · 02 Nov 2017
Evasion Attacks against Machine Learning at Test Time
Battista Biggio, Igino Corona, Davide Maiorca, B. Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, Fabio Roli
AAML · 2,159 citations · 21 Aug 2017
Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
SILM, OOD · 12,131 citations · 19 Jun 2017
Gradient descent GAN optimization is locally stable
Vaishnavh Nagarajan, J. Zico Kolter
GAN · 348 citations · 13 Jun 2017
Stabilizing Training of Generative Adversarial Networks through Regularization
Kevin Roth, Aurelien Lucchi, Sebastian Nowozin, Thomas Hofmann
GAN · 446 citations · 25 May 2017
Formal Guarantees on the Robustness of a Classifier against Adversarial Manipulation
Matthias Hein, Maksym Andriushchenko
AAML · 512 citations · 23 May 2017
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini, D. Wagner
AAML · 1,864 citations · 20 May 2017
Ensemble Adversarial Training: Attacks and Defenses
Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel
AAML · 2,728 citations · 19 May 2017
Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning
Takeru Miyato, S. Maeda, Masanori Koyama, S. Ishii
GAN · 2,738 citations · 13 Apr 2017
Aggregated Residual Transformations for Deep Neural Networks
Saining Xie, Ross B. Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He
10,347 citations · 16 Nov 2016
Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner
OOD, AAML · 8,579 citations · 16 Aug 2016
Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML · 5,909 citations · 08 Jul 2016
Identity Mappings in Deep Residual Networks
Kaiming He, Xinming Zhang, Shaoqing Ren, Jian Sun
10,192 citations · 16 Mar 2016
Practical Black-Box Attacks against Machine Learning
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami
MLAU, AAML · 3,682 citations · 08 Feb 2016
The Limitations of Deep Learning in Adversarial Settings
Nicolas Papernot, Patrick McDaniel, S. Jha, Matt Fredrikson, Z. Berkay Celik, A. Swami
AAML · 3,967 citations · 24 Nov 2015
A Unified Gradient Regularization Family for Adversarial Examples
Chunchuan Lyu, Kaizhu Huang, Hai-Ning Liang
AAML · 209 citations · 19 Nov 2015
Explaining and Harnessing Adversarial Examples
Ian Goodfellow, Jonathon Shlens, Christian Szegedy
AAML, GAN · 19,107 citations · 20 Dec 2014
Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
AAML · 14,961 citations · 21 Dec 2013