ResearchTrend.AI

Towards Deep Learning Models Resistant to Adversarial Attacks

19 June 2017
Aleksander Madry
Aleksandar Makelov
Ludwig Schmidt
Dimitris Tsipras
Adrian Vladu
    SILM
    OOD

Papers citing "Towards Deep Learning Models Resistant to Adversarial Attacks"

50 / 6,519 papers shown
Adversarial Embedding: A robust and elusive Steganography and Watermarking technique
Salah Ghamizi
Maxime Cordy
Mike Papadakis
Yves Le Traon
WIGM
AAML
23
7
0
14 Nov 2019
There is Limited Correlation between Coverage and Robustness for Deep Neural Networks
Yizhen Dong
Peixin Zhang
Jingyi Wang
Shuang Liu
Jun Sun
Jianye Hao
Xinyu Wang
Li Wang
J. Dong
Ting Dai
OOD
AAML
21
32
0
14 Nov 2019
Adversarial Examples in Modern Machine Learning: A Review
R. Wiyatno
Anqi Xu
Ousmane Amadou Dia
A. D. Berker
AAML
21
104
0
13 Nov 2019
Improving Robustness of Task Oriented Dialog Systems
Arash Einolghozati
Sonal Gupta
Mrinal Mohit
Rushin Shah
35
22
0
12 Nov 2019
On Robustness to Adversarial Examples and Polynomial Optimization
Pranjal Awasthi
Abhratanu Dutta
Aravindan Vijayaraghavan
OOD
AAML
14
31
0
12 Nov 2019
Robust Design of Deep Neural Networks against Adversarial Attacks based on Lyapunov Theory
Arash Rahnama
A. Nguyen
Edward Raff
AAML
24
20
0
12 Nov 2019
Learning From Brains How to Regularize Machines
Zhe Li
Wieland Brendel
Edgar Y. Walker
Erick Cobos
Taliah Muhammad
Jacob Reimer
Matthias Bethge
Fabian H. Sinz
Xaq Pitkow
A. Tolias
OOD
AAML
29
62
0
11 Nov 2019
Self-training with Noisy Student improves ImageNet classification
Qizhe Xie
Minh-Thang Luong
Eduard H. Hovy
Quoc V. Le
NoLa
88
2,368
0
11 Nov 2019
GraphDefense: Towards Robust Graph Convolutional Networks
Xiaoyun Wang
Xuanqing Liu
Cho-Jui Hsieh
OOD
AAML
GNN
25
31
0
11 Nov 2019
Adaptive versus Standard Descent Methods and Robustness Against Adversarial Examples
Marc Khoury
AAML
23
1
0
09 Nov 2019
SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization
Haoming Jiang
Pengcheng He
Weizhu Chen
Xiaodong Liu
Jianfeng Gao
T. Zhao
43
559
0
08 Nov 2019
Adversarial Attacks on Time-Series Intrusion Detection for Industrial Control Systems
Giulio Zizzo
C. Hankin
S. Maffeis
Kevin Jones
AAML
24
16
0
08 Nov 2019
Discovering Invariances in Healthcare Neural Networks
M. T. Bahadori
Layne Price
OOD
21
0
0
08 Nov 2019
The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey
Olakunle Ibitoye
Rana Abou-Khamis
Mohamed el Shehaby
Ashraf Matrawy
M. O. Shafiq
AAML
44
68
0
06 Nov 2019
Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance
Zhengyu Zhao
Zhuoran Liu
Martha Larson
AAML
18
143
0
06 Nov 2019
Intriguing Properties of Adversarial ML Attacks in the Problem Space [Extended Version]
Jacopo Cortellazzi
Feargus Pendlebury
Daniel Arp
Erwin Quiring
Fabio Pierazzi
Lorenzo Cavallaro
AAML
35
0
0
05 Nov 2019
Coverage Guided Testing for Recurrent Neural Networks
Wei Huang
Youcheng Sun
Xing-E. Zhao
James Sharp
Wenjie Ruan
Jie Meng
Xiaowei Huang
AAML
40
47
0
05 Nov 2019
DLA: Dense-Layer-Analysis for Adversarial Example Detection
Philip Sperl
Ching-yu Kao
Peng Chen
Konstantin Böttinger
AAML
19
34
0
05 Nov 2019
Visual Privacy Protection via Mapping Distortion
Yiming Li
Peidong Liu
Yong Jiang
Shutao Xia
36
10
0
05 Nov 2019
A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models
Ren Pang
Hua Shen
Xinyang Zhang
S. Ji
Yevgeniy Vorobeychik
Xiaopu Luo
Alex Liu
Ting Wang
AAML
19
2
0
05 Nov 2019
Ensembles of Locally Independent Prediction Models
A. Ross
Weiwei Pan
Leo Anthony Celi
Finale Doshi-Velez
25
31
0
04 Nov 2019
Persistency of Excitation for Robustness of Neural Networks
Kamil Nar
S. Shankar Sastry
AAML
19
10
0
04 Nov 2019
Preventing Gradient Attenuation in Lipschitz Constrained Convolutional Networks
Qiyang Li
Saminul Haque
Cem Anil
James Lucas
Roger C. Grosse
Joern-Henrik Jacobsen
28
114
0
03 Nov 2019
Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems
Guangke Chen
Sen Chen
Lingling Fan
Xiaoning Du
Zhe Zhao
Fu Song
Yang Liu
AAML
19
194
0
03 Nov 2019
Online Robustness Training for Deep Reinforcement Learning
Marc Fischer
M. Mirman
Steven Stalder
Martin Vechev
OnRL
19
40
0
03 Nov 2019
MadNet: Using a MAD Optimization for Defending Against Adversarial Attacks
Shai Rozenberg
G. Elidan
Ran El-Yaniv
AAML
14
1
0
03 Nov 2019
Adversarial Music: Real World Audio Adversary Against Wake-word Detection System
Juncheng Billy Li
Shuhui Qu
Xinjian Li
Joseph Szurley
J. Zico Kolter
Florian Metze
AAML
10
64
0
31 Oct 2019
Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors
Zuxuan Wu
Ser-Nam Lim
L. Davis
Tom Goldstein
AAML
35
263
0
31 Oct 2019
Enhancing Certifiable Robustness via a Deep Model Ensemble
Huan Zhang
Minhao Cheng
Cho-Jui Hsieh
33
9
0
31 Oct 2019
A Decentralized Proximal Point-type Method for Saddle Point Problems
Weijie Liu
Aryan Mokhtari
Asuman Ozdaglar
S. Pattathil
Zebang Shen
Nenggan Zheng
72
30
0
31 Oct 2019
Investigating Resistance of Deep Learning-based IDS against Adversaries using min-max Optimization
Rana Abou-Khamis
Omair Shafiq
Ashraf Matrawy
AAML
16
40
0
30 Oct 2019
Fault Tolerance of Neural Networks in Adversarial Settings
Vasisht Duddu
N. Pillai
D. V. Rao
V. Balas
SILM
AAML
27
11
0
30 Oct 2019
Efficiently avoiding saddle points with zero order methods: No gradients required
Lampros Flokas
Emmanouil-Vasileios Vlatakis-Gkaragkounis
Georgios Piliouras
28
32
0
29 Oct 2019
Certified Adversarial Robustness for Deep Reinforcement Learning
Björn Lütjens
Michael Everett
Jonathan P. How
AAML
25
91
0
28 Oct 2019
Dr.VOT : Measuring Positive and Negative Voice Onset Time in the Wild
Yosi Shrem
Matthew A. Goldrick
Joseph Keshet
6
12
0
27 Oct 2019
Adversarial Defense via Local Flatness Regularization
Jia Xu
Yiming Li
Yong-jia Jiang
Shutao Xia
AAML
25
17
0
27 Oct 2019
Understanding and Quantifying Adversarial Examples Existence in Linear Classification
Xupeng Shi
A. Ding
AAML
22
3
0
27 Oct 2019
Effectiveness of random deep feature selection for securing image manipulation detectors against adversarial examples
Mauro Barni
Ehsan Nowroozi
B. Tondi
Bowen Zhang
AAML
16
17
0
25 Oct 2019
A Simple Dynamic Learning Rate Tuning Algorithm For Automated Training of DNNs
Koyel Mukherjee
Alind Khare
Ashish Verma
27
15
0
25 Oct 2019
Label Smoothing and Logit Squeezing: A Replacement for Adversarial Training?
Ali Shafahi
Amin Ghiasi
Furong Huang
Tom Goldstein
AAML
27
40
0
25 Oct 2019
ATZSL: Defensive Zero-Shot Recognition in the Presence of Adversaries
Xingxing Zhang
Shupeng Gui
Zhenfeng Zhu
Yao Zhao
Ji Liu
VLM
22
5
0
24 Oct 2019
Diametrical Risk Minimization: Theory and Computations
Matthew Norton
J. Royset
38
19
0
24 Oct 2019
Wasserstein Smoothing: Certified Robustness against Wasserstein Adversarial Attacks
Alexander Levine
S. Feizi
AAML
12
61
0
23 Oct 2019
A Useful Taxonomy for Adversarial Robustness of Neural Networks
L. Smith
AAML
36
6
0
23 Oct 2019
Structure Matters: Towards Generating Transferable Adversarial Images
Dan Peng
Zizhan Zheng
Linhao Luo
Xiaofeng Zhang
AAML
13
2
0
22 Oct 2019
An Alternative Surrogate Loss for PGD-based Adversarial Testing
Sven Gowal
J. Uesato
Chongli Qin
Po-Sen Huang
Timothy A. Mann
Pushmeet Kohli
AAML
52
89
0
21 Oct 2019
Improving Sequence Modeling Ability of Recurrent Neural Networks via Sememes
Yujia Qin
Fanchao Qi
Sicong Ouyang
Zhiyuan Liu
Cheng Yang
Yasheng Wang
Qun Liu
Maosong Sun
28
5
0
20 Oct 2019
Adversarial Attacks on Spoofing Countermeasures of automatic speaker verification
Songxiang Liu
Haibin Wu
Hung-yi Lee
Helen Meng
AAML
36
65
0
19 Oct 2019
Are Perceptually-Aligned Gradients a General Property of Robust Classifiers?
Simran Kaur
Jeremy M. Cohen
Zachary Chase Lipton
OOD
AAML
27
65
0
18 Oct 2019
A Fast Saddle-Point Dynamical System Approach to Robust Deep Learning
Yasaman Esfandiari
Aditya Balu
K. Ebrahimi
Umesh Vaidya
N. Elia
Soumik Sarkar
OOD
28
3
0
18 Oct 2019
Pages: 1 2 3 … 115 116 117 … 129 130 131