A Unified Gradient Regularization Family for Adversarial Examples

19 November 2015
Chunchuan Lyu
Kaizhu Huang
Hai-Ning Liang
AAML

Papers citing "A Unified Gradient Regularization Family for Adversarial Examples"

38 / 38 papers shown

New Perspectives on Regularization and Computation in Optimal Transport-Based Distributionally Robust Optimization
Soroosh Shafieezadeh-Abadeh
Liviu Aolaritei
Florian Dorfler
Daniel Kuhn
66
20
0
31 Dec 2024

A Learning Paradigm for Interpretable Gradients
Felipe Figueroa
Hanwei Zhang
R. Sicre
Yannis Avrithis
Stéphane Ayache
FAtt
28
0
0
23 Apr 2024

Deep neural networks for choice analysis: Enhancing behavioral regularity with gradient regularization
Siqi Feng
Rui Yao
Stephane Hess
Ricardo A. Daziano
Timothy Brathwaite
Joan Walker
Shenhao Wang
30
1
0
23 Apr 2024

ML-On-Rails: Safeguarding Machine Learning Models in Software Systems A Case Study
Hala Abdelkader
Mohamed Abdelrazek
Scott Barnett
Jean-Guy Schneider
Priya Rani
Rajesh Vasa
40
3
0
12 Jan 2024

Doubly Perturbed Task Free Continual Learning
Byung Hyun Lee
Min-hwan Oh
Se Young Chun
32
3
0
20 Dec 2023

Training Image Derivatives: Increased Accuracy and Universal Robustness
V. Avrutskiy
51
0
0
21 Oct 2023

Adversarial Amendment is the Only Force Capable of Transforming an Enemy into a Friend
Chong Yu
Tao Chen
Zhongxue Gan
AAML
23
1
0
18 May 2023

Threats, Vulnerabilities, and Controls of Machine Learning Based Systems: A Survey and Taxonomy
Yusuke Kawamoto
Kazumasa Miyake
K. Konishi
Y. Oiwa
29
4
0
18 Jan 2023

On adversarial robustness and the use of Wasserstein ascent-descent dynamics to enforce it
Camilo A. Garcia Trillos
Nicolas García Trillos
39
5
0
09 Jan 2023

Deep Learning for Brain Age Estimation: A Systematic Review
Md. Iftekhar Tanveer
M. A. Ganaie
I. Beheshti
Tripti Goel
Nehal Ahmad
Kuan-Ting Lai
Kaizhu Huang
Yudong Zhang
Javier Del Ser
Chin-Teng Lin
38
86
0
07 Dec 2022

A Survey of Robust Adversarial Training in Pattern Recognition: Fundamental, Theory, and Methodologies
Zhuang Qian
Kaizhu Huang
Qiufeng Wang
Xu-Yao Zhang
OOD
AAML
ObjD
54
72
0
26 Mar 2022

The Geometry of Adversarial Training in Binary Classification
Leon Bungert
Nicolas García Trillos
Ryan W. Murray
AAML
34
23
0
26 Nov 2021

Generative Dynamic Patch Attack
Xiang Li
Shihao Ji
AAML
30
22
0
08 Nov 2021

On the regularized risk of distributionally robust learning over deep neural networks
Camilo A. Garcia Trillos
Nicolas García Trillos
OOD
45
10
0
13 Sep 2021

Evolving Architectures with Gradient Misalignment toward Low Adversarial Transferability
K. Operiano
W. Pora
H. Iba
Hiroshi Kera
AAML
39
1
0
13 Sep 2021

Why Adversarial Reprogramming Works, When It Fails, and How to Tell the Difference
Yang Zheng
Xiaoyi Feng
Zhaoqiang Xia
Xiaoyue Jiang
Ambra Demontis
Maura Pintor
Battista Biggio
Fabio Roli
AAML
30
22
0
26 Aug 2021

We Can Always Catch You: Detecting Adversarial Patched Objects WITH or WITHOUT Signature
Binxiu Liang
Jiachun Li
Jianjun Huang
AAML
33
12
0
09 Jun 2021

DL-Reg: A Deep Learning Regularization Technique using Linear Regression
Maryam Dialameh
A. Hamzeh
Hossein Rahmani
26
3
0
31 Oct 2020

The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples
Timo Freiesleben
GAN
43
62
0
11 Sep 2020

Quantifying the Preferential Direction of the Model Gradient in Adversarial Training With Projected Gradient Descent
Ricardo Bigolin Lanfredi
Joyce D. Schroeder
Tolga Tasdizen
27
11
0
10 Sep 2020

Adversarial Examples on Object Recognition: A Comprehensive Survey
A. Serban
E. Poll
Joost Visser
AAML
29
73
0
07 Aug 2020

On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping
Sanghyun Hong
Varun Chandrasekaran
Yigitcan Kaya
Tudor Dumitras
Nicolas Papernot
AAML
28
136
0
26 Feb 2020

Adversarial Perturbations Prevail in the Y-Channel of the YCbCr Color Space
Camilo Pestana
Naveed Akhtar
Wei Liu
D. Glance
Ajmal Mian
AAML
29
10
0
25 Feb 2020

Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and The Way Forward
A. Qayyum
Muhammad Usama
Junaid Qadir
Ala I. Al-Fuqaha
AAML
27
187
0
29 May 2019

Scaleable input gradient regularization for adversarial robustness
Chris Finlay
Adam M. Oberman
AAML
16
77
0
27 May 2019

Biometric Backdoors: A Poisoning Attack Against Unsupervised Template Updating
Giulio Lovisotto
Simon Eberz
Ivan Martinovic
AAML
21
35
0
22 May 2019

Daedalus: Breaking Non-Maximum Suppression in Object Detection via Adversarial Examples
Derui Wang
Chaoran Li
S. Wen
Qing-Long Han
Surya Nepal
Xiangyu Zhang
Yang Xiang
AAML
30
40
0
06 Feb 2019

Generative Adversarial Classifier for Handwriting Characters Super-Resolution
Zhuang Qian
Kaizhu Huang
Qiufeng Wang
Jimin Xiao
Rui Zhang
GAN
25
26
0
18 Jan 2019

Defense-VAE: A Fast and Accurate Defense against Adversarial Attacks
Xiang Li
Shihao Ji
AAML
27
26
0
17 Dec 2018

Robustness via curvature regularization, and vice versa
Seyed-Mohsen Moosavi-Dezfooli
Alhussein Fawzi
J. Uesato
P. Frossard
AAML
29
318
0
23 Nov 2018

A Kernel Perspective for Regularizing Deep Neural Networks
A. Bietti
Grégoire Mialon
Dexiong Chen
Julien Mairal
16
15
0
30 Sep 2018

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
Ambra Demontis
Marco Melis
Maura Pintor
Matthew Jagielski
Battista Biggio
Alina Oprea
Cristina Nita-Rotaru
Fabio Roli
SILM
AAML
19
11
0
08 Sep 2018

Bidirectional Learning for Robust Neural Networks
S. Pontes-Filho
Marcus Liwicki
20
9
0
21 May 2018

Defending against Adversarial Attack towards Deep Neural Networks via Collaborative Multi-task Training
Derui Wang
Chaoran Li
S. Wen
Surya Nepal
Yang Xiang
AAML
41
29
0
14 Mar 2018

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Battista Biggio
Fabio Roli
AAML
40
1,390
0
08 Dec 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
A. Madry
Aleksandar Makelov
Ludwig Schmidt
Dimitris Tsipras
Adrian Vladu
SILM
OOD
89
11,872
0
19 Jun 2017

Blocking Transferability of Adversarial Examples in Black-Box Learning Systems
Hossein Hosseini
Yize Chen
Sreeram Kannan
Baosen Zhang
Radha Poovendran
AAML
30
106
0
13 Mar 2017

Improving neural networks by preventing co-adaptation of feature detectors
Geoffrey E. Hinton
Nitish Srivastava
A. Krizhevsky
Ilya Sutskever
Ruslan Salakhutdinov
VLM
266
7,639
0
03 Jul 2012