Towards Deep Learning Models Resistant to Adversarial Attacks

19 June 2017
Aleksander Madry
Aleksandar Makelov
Ludwig Schmidt
Dimitris Tsipras
Adrian Vladu
    SILM OOD
arXiv:1706.06083 (abs) · PDF · HTML · GitHub (752★)
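For context, this paper's central contribution is adversarial training cast as a min-max problem, with the inner maximization approximated by projected gradient descent (PGD). The following is a minimal sketch of that training loop, assuming a PyTorch image classifier with inputs in [0, 1]; `model`, `loader`, `opt`, and the epsilon/step-size values are illustrative placeholders rather than settings taken from the paper's experiments.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Return an L-infinity-bounded adversarial example found by PGD."""
    # Random start inside the eps-ball, clipped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascent step on the gradient sign, then project back into the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_epoch(model, loader, opt, device="cpu"):
    """One epoch: inner maximization via PGD, then the usual outer minimization step."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)          # hypothetical hyperparameters above
        opt.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        opt.step()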

Papers citing "Towards Deep Learning Models Resistant to Adversarial Attacks"

50 / 6,612 papers shown
Improving Adversarial Robustness via Guided Complement Entropy
Hao-Yun Chen
Jhao-Hong Liang
Shih-Chieh Chang
Jia Pan
Yu-Ting Chen
Wei Wei
Da-Cheng Juan
AAML
67
49
0
23 Mar 2019
Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition
Yao Qin
Nicholas Carlini
Ian Goodfellow
G. Cottrell
Colin Raffel
AAML
107
381
0
22 Mar 2019
Adversarial camera stickers: A physical camera-based attack on deep learning systems
Juncheng Billy Li
Frank R. Schmidt
J. Zico Kolter
AAML
85
168
0
21 Mar 2019
Interpreting Neural Networks Using Flip Points
Roozbeh Yousefzadeh
D. O’Leary
AAML FAtt
47
17
0
21 Mar 2019
Provable Certificates for Adversarial Examples: Fitting a Ball in the Union of Polytopes
Matt Jordan
Justin Lewis
A. Dimakis
AAML
79
57
0
20 Mar 2019
Implicit Generation and Generalization in Energy-Based Models
Yilun Du
Igor Mordatch
BDL DiffM
74
40
0
20 Mar 2019
On the Robustness of Deep K-Nearest Neighbors
Chawin Sitawarin
David Wagner
AAML OOD
140
58
0
20 Mar 2019
On Certifying Non-uniform Bound against Adversarial Attacks
Chen Liu
Ryota Tomioka
Volkan Cevher
AAML
79
19
0
15 Mar 2019
A Research Agenda: Dynamic Models to Defend Against Correlated Attacks
Ian Goodfellow
AAML OOD
85
31
0
14 Mar 2019
Attribution-driven Causal Analysis for Detection of Adversarial Examples
Susmit Jha
Sunny Raj
S. Fernandes
Sumit Kumar Jha
S. Jha
Gunjan Verma
B. Jalaeian
A. Swami
AAML
75
17
0
14 Mar 2019
Semantics Preserving Adversarial Learning
Ousmane Amadou Dia
Elnaz Barshan
Reza Babanezhad
AAML GAN
113
2
0
10 Mar 2019
GanDef: A GAN based Adversarial Training Defense for Neural Network Classifier
Guanxiong Liu
Issa M. Khalil
Abdallah Khreishah
GAN AAML
65
19
0
06 Mar 2019
Detecting Overfitting via Adversarial Examples
Roman Werpachowski
András Gyorgy
Csaba Szepesvári
TDI
86
45
0
06 Mar 2019
Negative Training for Neural Dialogue Response Generation
Tianxing He
James R. Glass
87
61
0
06 Mar 2019
Statistical Guarantees for the Robustness of Bayesian Neural Networks
L. Cardelli
Marta Kwiatkowska
Luca Laurenti
Nicola Paoletti
A. Patané
Matthew Wicker
AAML
89
54
0
05 Mar 2019
Defense Against Adversarial Images using Web-Scale Nearest-Neighbor Search
Abhimanyu Dubey
Laurens van der Maaten
Zeki Yalniz
Yixuan Li
D. Mahajan
AAML
115
66
0
05 Mar 2019
A Kernelized Manifold Mapping to Diminish the Effect of Adversarial Perturbations
Saeid Asgari Taghanaki
Kumar Abhishek
Shekoofeh Azizi
Ghassan Hamarneh
AAML
89
41
0
03 Mar 2019
PuVAE: A Variational Autoencoder to Purify Adversarial Examples
Uiwon Hwang
Jaewoo Park
Hyemi Jang
Sungroh Yoon
N. Cho
AAML
75
77
0
02 Mar 2019
On the Effectiveness of Low Frequency Perturbations
Yash Sharma
G. Ding
Marcus A. Brubaker
AAML
92
126
0
28 Feb 2019
Enhancing the Robustness of Deep Neural Networks by Boundary Conditional GAN
Ke Sun
Zhanxing Zhu
Zhouchen Lin
AAML
62
20
0
28 Feb 2019
Towards Understanding Adversarial Examples Systematically: Exploring Data Size, Task and Model Factors
Ke Sun
Zhanxing Zhu
Zhouchen Lin
AAML
76
18
0
28 Feb 2019
Adversarial Attack and Defense on Point Sets
Jiancheng Yang
Qiang Zhang
Rongyao Fang
Bingbing Ni
Jinxian Liu
Qi Tian
3DPC
112
125
0
28 Feb 2019
Adversarial Attacks on Time Series
Fazle Karim
Somshubra Majumdar
H. Darabi
AI4TS
94
100
0
27 Feb 2019
The Best Defense Is a Good Offense: Adversarial Attacks to Avoid Modulation Detection
Muhammad Zaid Hameed
András Gyorgy
Deniz Gunduz
AAML
83
73
0
27 Feb 2019
Robust Decision Trees Against Adversarial Examples
Hongge Chen
Huan Zhang
Duane S. Boning
Cho-Jui Hsieh
AAML
142
117
0
27 Feb 2019
Verification of Non-Linear Specifications for Neural Networks
Chongli Qin
Krishnamurthy Dvijotham
Brendan O'Donoghue
Rudy Bunel
Robert Stanforth
Sven Gowal
J. Uesato
G. Swirszcz
Pushmeet Kohli
AAML
68
44
0
25 Feb 2019
Adversarial attacks hidden in plain sight
Jan Philip Göpfert
André Artelt
H. Wersing
Barbara Hammer
AAML
46
17
0
25 Feb 2019
Adversarial Reinforcement Learning under Partial Observability in Autonomous Computer Network Defence
Yi Han
David Hubczenko
Paul Montague
O. Vel
Tamas Abraham
Benjamin I. P. Rubinstein
C. Leckie
T. Alpcan
S. Erfani
AAML
54
6
0
25 Feb 2019
A Convex Relaxation Barrier to Tight Robustness Verification of Neural Networks
Hadi Salman
Greg Yang
Huan Zhang
Cho-Jui Hsieh
Pengchuan Zhang
AAML
148
271
0
23 Feb 2019
On the Sensitivity of Adversarial Robustness to Input Data Distributions
G. Ding
Kry Yik-Chau Lui
Xiaomeng Jin
Luyu Wang
Ruitong Huang
OOD
64
60
0
22 Feb 2019
Solving a Class of Non-Convex Min-Max Games Using Iterative First Order Methods
Maher Nouiehed
Maziar Sanjabi
Tianjian Huang
Jason D. Lee
Meisam Razaviyayn
113
344
0
21 Feb 2019
Quantifying Perceptual Distortion of Adversarial Examples
Matt Jordan
N. Manoj
Surbhi Goel
A. Dimakis
68
39
0
21 Feb 2019
Wasserstein Adversarial Examples via Projected Sinkhorn Iterations
Eric Wong
Frank R. Schmidt
J. Zico Kolter
AAML
95
211
0
21 Feb 2019
Perceptual Quality-preserving Black-Box Attack against Deep Learning Image Classifiers
Diego Gragnaniello
Francesco Marra
Giovanni Poggi
L. Verdoliva
AAML
35
30
0
20 Feb 2019
advertorch v0.1: An Adversarial Robustness Toolbox based on PyTorch
G. Ding
Luyu Wang
Xiaomeng Jin
74
183
0
20 Feb 2019
Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure
Fuli Feng
Xiangnan He
Jie Tang
Tat-Seng Chua
OOD AAML
116
221
0
20 Feb 2019
There are No Bit Parts for Sign Bits in Black-Box Attacks
Abdullah Al-Dujaili
Una-May O’Reilly
AAML
116
20
0
19 Feb 2019
On Evaluating Adversarial Robustness
Nicholas Carlini
Anish Athalye
Nicolas Papernot
Wieland Brendel
Jonas Rauber
Dimitris Tsipras
Ian Goodfellow
Aleksander Madry
Alexey Kurakin
ELM AAML
147
905
0
18 Feb 2019
Mockingbird: Defending Against Deep-Learning-Based Website Fingerprinting Attacks with Adversarial Traces
Mohammad Saidur Rahman
Mohsen Imani
Nate Mathews
M. Wright
AAML
86
81
0
18 Feb 2019
AuxBlocks: Defense Adversarial Example via Auxiliary Blocks
Yueyao Yu
Pengfei Yu
Wenye Li
AAML
18
6
0
18 Feb 2019
Mitigation of Adversarial Examples in RF Deep Classifiers Utilizing AutoEncoder Pre-training
S. Kokalj-Filipovic
Rob Miller
Nicholas Chang
Chi Leung Lau
AAML
54
41
0
16 Feb 2019
Adversarial Examples in RF Deep Learning: Detection of the Attack and its Physical Robustness
S. Kokalj-Filipovic
Rob Miller
AAML
60
31
0
16 Feb 2019
Do ImageNet Classifiers Generalize to ImageNet?
Benjamin Recht
Rebecca Roelofs
Ludwig Schmidt
Vaishaal Shankar
OOD SSeg VLM
138
1,733
0
13 Feb 2019
The Odds are Odd: A Statistical Test for Detecting Adversarial Examples
Kevin Roth
Yannic Kilcher
Thomas Hofmann
AAML
80
176
0
13 Feb 2019
Examining Adversarial Learning against Graph-based IoT Malware Detection Systems
Ahmed A. Abusnaina
Aminollah Khormali
Hisham Alasmary
Jeman Park
Afsah Anwar
Ulku Meteriz
Aziz Mohaisen
AAML
45
5
0
12 Feb 2019
Towards a Robust Deep Neural Network in Texts: A Survey
Wenqi Wang
Benxiao Tang
Run Wang
Lina Wang
Aoshuang Ye
AAML
99
39
0
12 Feb 2019
VC Classes are Adversarially Robustly Learnable, but Only Improperly
Omar Montasser
Steve Hanneke
Nathan Srebro
91
141
0
12 Feb 2019
Model Compression with Adversarial Robustness: A Unified Optimization Framework
Shupeng Gui
Haotao Wang
Chen Yu
Haichuan Yang
Zhangyang Wang
Ji Liu
MQ
79
139
0
10 Feb 2019
Image Decomposition and Classification through a Generative Model
Houpu Yao
Malcolm Regan
Yezhou Yang
Yi Ren
GAN
37
1
0
09 Feb 2019
Minimal Images in Deep Neural Networks: Fragile Object Recognition in Natural Images
S. Srivastava
Guy Ben-Yosef
Xavier Boix
AAML
60
27
0
08 Feb 2019