Towards Deep Learning Models Resistant to Adversarial Attacks

19 June 2017
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
SILM · OOD
ArXiv · PDF · HTML
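The paper above frames adversarial robustness as a saddle-point problem, minimizing over the network parameters the expected worst-case loss over perturbations inside an $L_\infty$ ball, and approximates the inner maximization with projected gradient descent (PGD). As a quick reference for the citing papers listed below, here is a minimal PGD adversarial-training sketch; it assumes a PyTorch classifier with inputs scaled to [0, 1], and the model, data loader, and epsilon/step-size/iteration values are illustrative placeholders rather than anything taken from the paper or this page.

```python
# Minimal sketch of PGD adversarial training, assuming a PyTorch image
# classifier with inputs in [0, 1]; the model, data loader, and the
# epsilon / step-size / iteration values are illustrative placeholders.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, step=2 / 255, iters=10):
    """Approximate the inner maximization with an L-infinity PGD attack."""
    delta = torch.empty_like(x).uniform_(-eps, eps)  # random start in the eps-ball
    delta.requires_grad_(True)
    for _ in range(iters):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # Gradient ascent on the loss, then project back onto the eps-ball
        # and the valid pixel range.
        delta = (delta + step * grad.sign()).clamp(-eps, eps)
        delta = ((x + delta).clamp(0, 1) - x).detach().requires_grad_(True)
    return (x + delta).detach()

def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    """One epoch of the outer minimization: train on PGD adversarial examples."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)          # inner maximization
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)  # outer minimization step
        loss.backward()
        optimizer.step()
```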

Papers citing "Towards Deep Learning Models Resistant to Adversarial Attacks"

50 / 6,519 papers shown
Fooling a Real Car with Adversarial Traffic Signs
N. Morgulis, Alexander Kreines, Shachar Mendelowitz, Yuval Weisglass
AAML · 16 · 91 · 0 · 30 Jun 2019
Training individually fair ML models with Sensitive Subspace Robustness
Mikhail Yurochkin, Amanda Bower, Yuekai Sun
FaML, OOD · 27 · 119 · 0 · 28 Jun 2019
Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty
Dan Hendrycks, Mantas Mazeika, Saurav Kadavath, D. Song
OOD, SSL · 10 · 936 · 0 · 28 Jun 2019
Using Intuition from Empirical Properties to Simplify Adversarial Training Defense
Guanxiong Liu, Issa M. Khalil, Abdallah Khreishah
AAML · 20 · 2 · 0 · 27 Jun 2019
Evolving Robust Neural Architectures to Defend from Adversarial Attacks
Shashank Kotyan, Danilo Vasconcellos Vargas
OOD, AAML · 24 · 36 · 0 · 27 Jun 2019
Invariance-inducing regularization using worst-case transformations suffices to boost accuracy and spatial robustness
Fanny Yang, Zuowen Wang, C. Heinze-Deml
28 · 42 · 0 · 26 Jun 2019
Defending Adversarial Attacks by Correcting logits
Yifeng Li, Lingxi Xie, Ya Zhang, Rui Zhang, Yanfeng Wang, Qi Tian
AAML · 29 · 5 · 0 · 26 Jun 2019
Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks
Tribhuvanesh Orekondy, Bernt Schiele, Mario Fritz
AAML · 19 · 164 · 0 · 26 Jun 2019
Are Adversarial Perturbations a Showstopper for ML-Based CAD? A Case Study on CNN-Based Lithographic Hotspot Detection
Kang Liu, Haoyu Yang, Yuzhe Ma, Benjamin Tan, Bei Yu, Evangeline F. Y. Young, Ramesh Karri, S. Garg
AAML · 20 · 10 · 0 · 25 Jun 2019
Defending Against Adversarial Examples with K-Nearest Neighbor
Chawin Sitawarin, David Wagner
AAML · 11 · 29 · 0 · 23 Jun 2019
A Fourier Perspective on Model Robustness in Computer Vision
Dong Yin, Raphael Gontijo-Lopes, Jonathon Shlens, E. D. Cubuk, Justin Gilmer
OOD · 37 · 490 · 0 · 21 Jun 2019
On Physical Adversarial Patches for Object Detection
Mark Lee, Zico Kolter
AAML · 19 · 166 · 0 · 20 Jun 2019
Improving the robustness of ImageNet classifiers using elements of human visual cognition
A. Orhan, Brenden M. Lake
VLM · 11 · 5 · 0 · 20 Jun 2019
Cloud-based Image Classification Service Is Not Robust To Simple Transformations: A Forgotten Battlefield
Dou Goodman, Tao Wei
AAML · 27 · 6 · 0 · 19 Jun 2019
A unified view on differential privacy and robustness to adversarial examples
Rafael Pinot, Florian Yger, Cédric Gouy-Pailler, Jamal Atif
AAML · 21 · 17 · 0 · 19 Jun 2019
SemanticAdv: Generating Adversarial Examples via Attribute-conditional Image Editing
Haonan Qiu, Chaowei Xiao, Lei Yang, Xinchen Yan, Honglak Lee, Yue Liu
AAML · 28 · 170 · 0 · 19 Jun 2019
Global Adversarial Attacks for Assessing Deep Learning Robustness
Hanbin Hu, Mitt Shah, Jianhua Z. Huang, Peng Li
AAML · 28 · 4 · 0 · 19 Jun 2019
Convergence of Adversarial Training in Overparametrized Neural Networks
Ruiqi Gao, Tianle Cai, Haochuan Li, Liwei Wang, Cho-Jui Hsieh, Jason D. Lee
AAML · 21 · 107 · 0 · 19 Jun 2019
The Attack Generator: A Systematic Approach Towards Constructing Adversarial Attacks
F. Assion, Peter Schlicht, Florens Greßner, W. Günther, Fabian Hüger, Nico M. Schmidt, Umair Rasheed
AAML · 25 · 14 · 0 · 17 Jun 2019
Improving Black-box Adversarial Attacks with a Transfer-based Prior
Shuyu Cheng, Yinpeng Dong, Tianyu Pang, Hang Su, Jun Zhu
AAML · 34 · 271 · 0 · 17 Jun 2019
Interpolated Adversarial Training: Achieving Robust Neural Networks without Sacrificing Too Much Accuracy
Alex Lamb, Vikas Verma, Kenji Kawaguchi, Alexander Matyasko, Savya Khosla, Arno Solin, Yoshua Bengio
AAML · 30 · 99 · 0 · 16 Jun 2019
Defending Against Adversarial Attacks Using Random Forests
Yifan Ding, Liqiang Wang, Huan Zhang, Jinfeng Yi, Deliang Fan, Boqing Gong
AAML · 21 · 14 · 0 · 16 Jun 2019
Representation Quality Of Neural Networks Links To Adversarial Attacks and Defences
Shashank Kotyan, Danilo Vasconcellos Vargas, Moe Matsuki
12 · 0 · 0 · 15 Jun 2019
Robust or Private? Adversarial Training Makes Models More Vulnerable to Privacy Attacks
Felipe A. Mejia, Paul Gamble, Z. Hampel-Arias, M. Lomnitz, Nina Lopatina, Lucas Tindall, M. Barrios
SILM · 27 · 18 · 0 · 15 Jun 2019
Towards Stable and Efficient Training of Verifiably Robust Neural Networks
Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Yue Liu, Duane S. Boning, Cho-Jui Hsieh
AAML · 17 · 344 · 0 · 14 Jun 2019
Distributionally Robust Counterfactual Risk Minimization
Louis Faury, Ugo Tanielian, Flavian Vasile, E. Smirnova, Elvis Dohmatob
28 · 45 · 0 · 14 Jun 2019
Towards Compact and Robust Deep Neural Networks
Vikash Sehwag, Shiqi Wang, Prateek Mittal, Suman Jana
AAML · 35 · 40 · 0 · 14 Jun 2019
Copy and Paste: A Simple But Effective Initialization Method for Black-Box Adversarial Attacks
T. Brunner, Frederik Diehl, Alois Knoll
AAML · 14 · 8 · 0 · 14 Jun 2019
Adversarial Training Can Hurt Generalization
Aditi Raghunathan, Sang Michael Xie, Fanny Yang, John C. Duchi, Percy Liang
24 · 240 · 0 · 14 Jun 2019
Adversarial Robustness Assessment: Why both $L_0$ and $L_\infty$ Attacks Are Necessary
Shashank Kotyan, Danilo Vasconcellos Vargas
AAML · 17 · 8 · 0 · 14 Jun 2019
Lower Bounds for Adversarially Robust PAC Learning
Dimitrios I. Diochnos, Saeed Mahloujifar, Mohammad Mahmoody
AAML · 27 · 26 · 0 · 13 Jun 2019
A Computationally Efficient Method for Defending Adversarial Deep Learning Attacks
R. Sahay, Rehana Mahfuz, Aly El Gamal
AAML · 22 · 5 · 0 · 13 Jun 2019
Tight Certificates of Adversarial Robustness for Randomly Smoothed Classifiers
Guang-He Lee, Yang Yuan, Shiyu Chang, Tommi Jaakkola
AAML · 25 · 122 · 0 · 12 Jun 2019
Efficient and Accurate Estimation of Lipschitz Constants for Deep Neural Networks
Mahyar Fazlyab, Alexander Robey, Hamed Hassani, M. Morari, George J. Pappas
39 · 450 · 0 · 12 Jun 2019
Subspace Attack: Exploiting Promising Subspaces for Query-Efficient Black-box Attacks
Ziang Yan, Yiwen Guo, Changshui Zhang
AAML · 36 · 110 · 0 · 11 Jun 2019
Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective
Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, Xue Lin
AAML · 13 · 449 · 0 · 10 Jun 2019
Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective
Lu Wang, Xuanqing Liu, Jinfeng Yi, Zhi-Hua Zhou, Cho-Jui Hsieh
AAML · 39 · 22 · 0 · 10 Jun 2019
Robustness Verification of Tree-based Models
Hongge Chen, Huan Zhang, Si Si, Yang Li, Duane S. Boning, Cho-Jui Hsieh
AAML · 22 · 76 · 0 · 10 Jun 2019
Intriguing properties of adversarial training at scale
Cihang Xie, Alan Yuille
AAML · 13 · 68 · 0 · 10 Jun 2019
Improved Adversarial Robustness via Logit Regularization Methods
Cecilia Summers, M. Dinneen
AAML · 42 · 7 · 0 · 10 Jun 2019
Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers
Hadi Salman, Greg Yang, Jungshian Li, Pengchuan Zhang, Huan Zhang, Ilya P. Razenshteyn, Sébastien Bubeck
AAML · 45 · 538 · 0 · 09 Jun 2019
Adversarial Attack Generation Empowered by Min-Max Optimization
Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, M. Fardad, Yangqiu Song
AAML · 30 · 35 · 0 · 09 Jun 2019
Provably Robust Boosted Decision Stumps and Trees against Adversarial Attacks
Maksym Andriushchenko, Matthias Hein
28 · 61 · 0 · 08 Jun 2019
ML-LOO: Detecting Adversarial Examples with Feature Attribution
Puyudi Yang, Jianbo Chen, Cho-Jui Hsieh, Jane-ling Wang, Michael I. Jordan
AAML · 22 · 101 · 0 · 08 Jun 2019
Robustness for Non-Parametric Classification: A Generic Attack and Defense
Yao-Yuan Yang, Cyrus Rashtchian, Yizhen Wang, Kamalika Chaudhuri
SILM, AAML · 34 · 42 · 0 · 07 Jun 2019
A cryptographic approach to black box adversarial machine learning
Kevin Shi, Daniel J. Hsu, Allison Bishop
AAML · 16 · 3 · 0 · 07 Jun 2019
Kernelized Capsule Networks
Taylor W. Killian, Justin A. Goodwin, Olivia M. Brown, Sung-Hyun Son
GAN · 36 · 2 · 0 · 07 Jun 2019
Inductive Bias of Gradient Descent based Adversarial Training on Separable Data
Yan Li, Ethan X. Fang, Huan Xu, T. Zhao
17 · 16 · 0 · 07 Jun 2019
Adversarial Explanations for Understanding Image Classification Decisions and Improved Neural Network Robustness
Walt Woods, Jack H Chen, C. Teuscher
AAML · 18 · 46 · 0 · 07 Jun 2019
Robust Attacks against Multiple Classifiers
Juan C. Perdomo, Yaron Singer
AAML · 18 · 10 · 0 · 06 Jun 2019