ResearchTrend.AI

Jacobian Norm with Selective Input Gradient Regularization for Improved and Interpretable Adversarial Defense (arXiv:2207.13036)
9 July 2022
Deyin Liu, Lin Wu, Haifeng Zhao, F. Boussaïd, Bennamoun, Xianghua Xie
AAML

Papers citing "Jacobian Norm with Selective Input Gradient Regularization for Improved and Interpretable Adversarial Defense"

24 papers shown.

1. Policy Distillation with Selective Input Gradient Regularization for Efficient Interpretability
   Jinwei Xing, Takashi Nagata, Xinyun Zou, Emre Neftci, J. Krichmar
   AAML · 4 citations · 18 May 2022

2. Attack to Fool and Explain Deep Networks
   Naveed Akhtar, M. Jalwana, Bennamoun, Ajmal Mian
   AAML · 33 citations · 20 Jun 2021

3. Robust Learning with Jacobian Regularization
   Judy Hoffman, Daniel A. Roberts, Sho Yaida
   OOD, AAML · 167 citations · 07 Aug 2019

4. Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training
   Haichao Zhang, Jianyu Wang
   AAML · 230 citations · 24 Jul 2019

5. On the Connection Between Adversarial Robustness and Saliency Map Interpretability
   Christian Etmann, Sebastian Lunz, Peter Maass, Carola-Bibiane Schönlieb
   AAML, FAtt · 162 citations · 10 May 2019

6. Adversarial Examples Are Not Bugs, They Are Features
   Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Logan Engstrom, Brandon Tran, Aleksander Madry
   SILM · 1,838 citations · 06 May 2019

7. Adversarial Training for Free!
   Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John P. Dickerson, Christoph Studer, L. Davis, Gavin Taylor, Tom Goldstein
   AAML · 1,249 citations · 29 Apr 2019

8. Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack
   Adnan Siraj Rakin, Zhezhi He, Deliang Fan
   AAML · 292 citations · 22 Nov 2018

9. Robustness May Be at Odds with Accuracy
   Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry
   AAML · 1,781 citations · 30 May 2018

10. Certified Robustness to Adversarial Examples with Differential Privacy
    Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel J. Hsu, Suman Jana
    SILM, AAML · 934 citations · 09 Feb 2018

11. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
    Anish Athalye, Nicholas Carlini, D. Wagner
    AAML · 3,186 citations · 01 Feb 2018

12. Towards Robust Neural Networks via Random Self-ensemble
    Xuanqing Liu, Minhao Cheng, Huan Zhang, Cho-Jui Hsieh
    FedML, AAML · 421 citations · 02 Dec 2017

13. Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
    A. Ross, Finale Doshi-Velez
    AAML · 682 citations · 26 Nov 2017

14. ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
    Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh
    AAML · 1,879 citations · 14 Aug 2017

15. Ensemble Adversarial Training: Attacks and Defenses
    Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel
    AAML · 2,725 citations · 19 May 2017

16. The Space of Transferable Adversarial Examples
    Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel
    AAML, SILM · 557 citations · 11 Apr 2017

17. Delving into Transferable Adversarial Examples and Black-box Attacks
    Yanpei Liu, Xinyun Chen, Chang-rui Liu, D. Song
    AAML · 1,737 citations · 08 Nov 2016

18. Gradients of Counterfactuals
    Mukund Sundararajan, Ankur Taly, Qiqi Yan
    FAtt · 104 citations · 08 Nov 2016

19. Towards Evaluating the Robustness of Neural Networks
    Nicholas Carlini, D. Wagner
    OOD, AAML · 8,555 citations · 16 Aug 2016

20. Adversarial examples in the physical world
    Alexey Kurakin, Ian Goodfellow, Samy Bengio
    SILM, AAML · 5,897 citations · 08 Jul 2016

21. Practical Black-Box Attacks against Machine Learning
    Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami
    MLAU, AAML · 3,678 citations · 08 Feb 2016

22. Intriguing properties of neural networks
    Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
    AAML · 14,927 citations · 21 Dec 2013

23. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
    Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
    FAtt · 7,295 citations · 20 Dec 2013

24. How to Explain Individual Classification Decisions
    D. Baehrens, T. Schroeter, Stefan Harmeling, M. Kawanabe, K. Hansen, K. Müller
    FAtt · 1,104 citations · 06 Dec 2009