Towards Deep Learning Models Resistant to Adversarial Attacks (arXiv:1706.06083)

19 June 2017
A. Madry
Aleksandar Makelov
Ludwig Schmidt
Dimitris Tsipras
Adrian Vladu
    SILM
    OOD

Papers citing "Towards Deep Learning Models Resistant to Adversarial Attacks"

Showing 50 of 6,518 citing papers
A Survey of Safety and Trustworthiness of Deep Neural Networks: Verification, Testing, Adversarial Attack and Defence, and Interpretability
Xiaowei Huang
Daniel Kroening
Wenjie Ruan
Marta Kwiatkowska
Youcheng Sun
Emese Thamo
Min Wu
Xinping Yi
AAML
24
50
0
18 Dec 2018
Spartan Networks: Self-Feature-Squeezing Neural Networks for increased robustness in adversarial settings
François Menet
Paul Berthier
José M. Fernandez
M. Gagnon
AAML
10
10
0
17 Dec 2018
Perturbation Analysis of Learning Algorithms: A Unifying Perspective on Generation of Adversarial Examples
E. Balda
Arash Behboodi
R. Mathar
AAML
28
4
0
15 Dec 2018
Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing
Jingyi Wang
Guoliang Dong
Jun Sun
Xinyu Wang
Peixin Zhang
AAML
14
190
0
14 Dec 2018
Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem
Matthias Hein
Maksym Andriushchenko
Julian Bitterwolf
OODD
55
554
0
13 Dec 2018
Thwarting Adversarial Examples: An $L_0$-Robust Sparse Fourier Transform
Mitali Bafna
Jack Murtagh
Nikhil Vyas
AAML
13
48
0
12 Dec 2018
On the Security of Randomized Defenses Against Adversarial Samples
K. Sharad
G. Marson
H. Truong
Ghassan O. Karame
AAML
32
1
0
11 Dec 2018
Defending Against Universal Perturbations With Shared Adversarial Training
Chaithanya Kumar Mummadi
Thomas Brox
J. H. Metzen
AAML
18
60
0
10 Dec 2018
Learning Transferable Adversarial Examples via Ghost Networks
Yingwei Li
S. Bai
Yuyin Zhou
Cihang Xie
Zhishuai Zhang
Alan Yuille
AAML
42
136
0
09 Dec 2018
Feature Denoising for Improving Adversarial Robustness
Cihang Xie
Yuxin Wu
Laurens van der Maaten
Alan Yuille
Kaiming He
44
904
0
09 Dec 2018
AutoGAN: Robust Classifier Against Adversarial Attacks
Blerta Lindqvist
Shridatt Sugrim
R. Izmailov
AAML
29
7
0
08 Dec 2018
Adversarial Attacks, Regression, and Numerical Stability Regularization
A. Nguyen
Edward Raff
AAML
13
29
0
07 Dec 2018
Fooling Network Interpretation in Image Classification
Akshayvarun Subramanya
Vipin Pillai
Hamed Pirsiavash
AAML
FAtt
6
7
0
06 Dec 2018
MMA Training: Direct Input Space Margin Maximization through Adversarial Training
G. Ding
Yash Sharma
Kry Yik-Chau Lui
Ruitong Huang
AAML
27
270
0
06 Dec 2018
On Configurable Defense against Adversarial Example Attacks
Bo Luo
Min Li
Yu Li
Q. Xu
AAML
13
1
0
06 Dec 2018
Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples
Huangyi Ge
Sze Yiu Chau
Bruno Ribeiro
Ninghui Li
AAML
27
1
0
05 Dec 2018
Rigorous Agent Evaluation: An Adversarial Approach to Uncover Catastrophic Failures
Junhui Yin
Jiayan Qiu
Csaba Szepesvári
Siqing Zhang
Avraham Ruderman
Jiyang Xie
Krishnamurthy Dvijotham
Zhanyu Ma
N. Heess
Pushmeet Kohli
AAML
15
80
0
04 Dec 2018
Interpretable Deep Learning under Fire
Xinyang Zhang
Ningfei Wang
Hua Shen
S. Ji
Xiapu Luo
Ting Wang
AAML
AI4CE
30
169
0
03 Dec 2018
Disentangling Adversarial Robustness and Generalization
David Stutz
Matthias Hein
Bernt Schiele
AAML
OOD
194
275
0
03 Dec 2018
SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou
Florian Tramèr
Giancarlo Pellegrino
AAML
182
288
0
02 Dec 2018
FineFool: Fine Object Contour Attack via Attention
Jinyin Chen
Haibin Zheng
Hui Xiong
Mengmeng Su
AAML
25
3
0
01 Dec 2018
Effects of Loss Functions And Target Representations on Adversarial Robustness
Sean Saito
S. Roy
AAML
16
7
0
01 Dec 2018
Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification
Qi Lei
Lingfei Wu
Pin-Yu Chen
A. Dimakis
Inderjit S. Dhillon
Michael Witbrock
AAML
18
92
0
01 Dec 2018
ComDefend: An Efficient Image Compression Model to Defend Adversarial Examples
Xiaojun Jia
Xingxing Wei
Xiaochun Cao
H. Foroosh
AAML
69
264
0
30 Nov 2018
CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
Akhilan Boopathy
Tsui-Wei Weng
Pin-Yu Chen
Sijia Liu
Luca Daniel
AAML
108
138
0
29 Nov 2018
A randomized gradient-free attack on ReLU networks
Francesco Croce
Matthias Hein
AAML
37
21
0
28 Nov 2018
Universal Adversarial Training
A. Mendrik
Mahyar Najibi
Zheng Xu
John P. Dickerson
L. Davis
Tom Goldstein
AAML
OOD
24
189
0
27 Nov 2018
Robust Classification of Financial Risk
Suproteem K. Sarkar
Kojin Oshiba
Daniel Giebisch
Yaron Singer
AAML
OOD
14
14
0
27 Nov 2018
A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks
Jinghui Chen
Dongruo Zhou
Jinfeng Yi
Quanquan Gu
AAML
20
68
0
27 Nov 2018
ResNets Ensemble via the Feynman-Kac Formalism to Improve Natural and Robust Accuracies
Bao Wang
Binjie Yuan
Zuoqiang Shi
Stanley J. Osher
AAML
OOD
16
15
0
26 Nov 2018
Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks
Jianyu Wang
Haichao Zhang
OOD
AAML
32
118
0
26 Nov 2018
Noisy Computations during Inference: Harmful or Helpful?
Minghai Qin
D. Vučinić
AAML
16
5
0
26 Nov 2018
Attention, Please! Adversarial Defense via Activation Rectification and Preservation
Shangxi Wu
Jitao Sang
Kaiyuan Xu
Jiaming Zhang
Jian Yu
AAML
6
7
0
24 Nov 2018
Robustness via curvature regularization, and vice versa
Seyed-Mohsen Moosavi-Dezfooli
Alhussein Fawzi
J. Uesato
P. Frossard
AAML
29
318
0
23 Nov 2018
Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses
Jérôme Rony
L. G. Hafemann
Luiz Eduardo Soares de Oliveira
Ismail Ben Ayed
R. Sabourin
Eric Granger
AAML
9
297
0
23 Nov 2018
Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack
Adnan Siraj Rakin
Zhezhi He
Deliang Fan
AAML
13
287
0
22 Nov 2018
Strength in Numbers: Trading-off Robustness and Computation via Adversarially-Trained Ensembles
Edward Grefenstette
Robert Stanforth
Brendan O'Donoghue
J. Uesato
G. Swirszcz
Pushmeet Kohli
AAML
36
18
0
22 Nov 2018
Task-generalizable Adversarial Attack based on Perceptual Metric
Muzammal Naseer
Salman H. Khan
Shafin Rahman
Fatih Porikli
AAML
21
39
0
22 Nov 2018
MimicGAN: Corruption-Mimicking for Blind Image Recovery & Adversarial Defense
Rushil Anirudh
Jayaraman J. Thiagarajan
B. Kailkhura
T. Bremer
GAN
8
2
0
20 Nov 2018
Intermediate Level Adversarial Attack for Enhanced Transferability
Qian Huang
Zeqi Gu
Isay Katsman
Horace He
Pian Pawakapan
Zhiqiu Lin
Serge J. Belongie
Ser-Nam Lim
AAML
SILM
11
4
0
20 Nov 2018
Lightweight Lipschitz Margin Training for Certified Defense against Adversarial Examples
Hajime Ono
Tsubasa Takahashi
Kazuya Kakizaki
AAML
16
4
0
20 Nov 2018
Optimal Transport Classifier: Defending Against Adversarial Attacks by Regularized Deep Embedding
Yao Li
Martin Renqiang Min
Wenchao Yu
Cho-Jui Hsieh
T. C. Lee
E. Kruus
OT
24
7
0
19 Nov 2018
Scalable agent alignment via reward modeling: a research direction
Jan Leike
David M. Krueger
Tom Everitt
Miljan Martic
Vishal Maini
Shane Legg
34
397
0
19 Nov 2018
Generalizable Adversarial Training via Spectral Normalization
Farzan Farnia
Jesse M. Zhang
David Tse
OOD
AAML
45
138
0
19 Nov 2018
A Statistical Approach to Assessing Neural Network Robustness
Stefan Webb
Tom Rainforth
Yee Whye Teh
M. P. Kumar
AAML
11
81
0
17 Nov 2018
A note on hyperparameters in black-box adversarial examples
Jamie Hayes
AAML
MLAU
14
0
0
15 Nov 2018
A Spectral View of Adversarially Robust Features
Shivam Garg
Vatsal Sharan
B. Zhang
Gregory Valiant
AAML
14
21
0
15 Nov 2018
Mathematical Analysis of Adversarial Attacks
Zehao Dou
Stanley J. Osher
Bao Wang
AAML
24
18
0
15 Nov 2018
Verification of Recurrent Neural Networks Through Rule Extraction
Qinglong Wang
Kaixuan Zhang
Xue Liu
C. Lee Giles
AAML
28
18
0
14 Nov 2018
Sorting out Lipschitz function approximation
Cem Anil
James Lucas
Roger C. Grosse
30
318
0
13 Nov 2018