ResearchTrend.AI
Towards Deep Learning Models Resistant to Adversarial Attacks

19 June 2017
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
Tags: SILM, OOD

Papers citing "Towards Deep Learning Models Resistant to Adversarial Attacks"

50 / 6,519 papers shown
Robust Local Features for Improving the Generalization of Adversarial Training
Chuanbiao Song, Kun He, Jiadong Lin, Liwei Wang, J. Hopcroft
OOD, AAML · 23 Sep 2019

COPYCAT: Practical Adversarial Attacks on Visualization-Based Malware Detection
Aminollah Khormali, Ahmed A. Abusnaina, Songqing Chen, Daehun Nyang, Aziz Mohaisen
AAML · 20 Sep 2019

Defending Against Physically Realizable Attacks on Image Classification
Tong Wu, Liang Tong, Yevgeniy Vorobeychik
AAML · 20 Sep 2019

Representation Learning for Electronic Health Records
W. Weng, Peter Szolovits
19 Sep 2019

Training Robust Deep Neural Networks via Adversarial Noise Propagation
Aishan Liu, Xianglong Liu, Chongzhi Zhang, Hang Yu, Qiang Liu, Dacheng Tao
AAML · 19 Sep 2019

Adversarial Vulnerability Bounds for Gaussian Process Classification
M. Smith, Kathrin Grosse, Michael Backes, Mauricio A. Alvarez
AAML · 19 Sep 2019

Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks
Sekitoshi Kanai, Yasutoshi Ida, Yasuhiro Fujiwara, Masanori Yamada, S. Adachi
AAML · 19 Sep 2019

Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, Anil K. Jain
AAML · 17 Sep 2019

Towards Quality Assurance of Software Product Lines with Adversarial Configurations
Paul Temple, M. Acher, Gilles Perrouin, Battista Biggio, J. Jézéquel, Fabio Roli
AAML · 16 Sep 2019

Interpreting and Improving Adversarial Robustness of Deep Neural Networks with Neuron Sensitivity
Chongzhi Zhang, Aishan Liu, Xianglong Liu, Yitao Xu, Hang Yu, Yuqing Ma, Tianlin Li
AAML · 16 Sep 2019

Detecting Adversarial Samples Using Influence Functions and Nearest Neighbors
Gilad Cohen, Guillermo Sapiro, Raja Giryes
TDI · 15 Sep 2019

White-Box Adversarial Defense via Self-Supervised Data Estimation
Zudi Lin, Hanspeter Pfister, Ziming Zhang
AAML · 13 Sep 2019

Defending Against Adversarial Attacks by Suppressing the Largest Eigenvalue of Fisher Information Matrix
Yaxin Peng, Chaomin Shen, Guixu Zhang, Jinsong Fan
AAML · 13 Sep 2019

Towards Model-Agnostic Adversarial Defenses using Adversarially Trained Autoencoders
Pratik Vaishnavi, Kevin Eykholt, A. Prakash, Amir Rahmati
AAML · 12 Sep 2019

Inspecting adversarial examples using the Fisher information
Jörg Martin, Clemens Elster
AAML · 12 Sep 2019

Feedback Learning for Improving the Robustness of Neural Networks
Chang Song, Zuoguan Wang, H. Li
AAML · 12 Sep 2019

Structural Robustness for Deep Learning Architectures
Carlos Lassance, Vincent Gripon, Jian Tang, Antonio Ortega
OOD · 11 Sep 2019

Sparse and Imperceivable Adversarial Attacks
Francesco Croce, Matthias Hein
AAML · 11 Sep 2019

PDA: Progressive Data Augmentation for General Robustness of Deep Neural Networks
Hang Yu, Aishan Liu, Xianglong Liu, Gen Li, Ping Luo, R. Cheng, Jichen Yang, Chongzhi Zhang
AAML · 11 Sep 2019

Localized Adversarial Training for Increased Accuracy and Robustness in Image Classification
Eitan Rothberg, Tingting Chen, Luo Jie, Hao Ji
AAML · 10 Sep 2019

Neural Belief Reasoner
Haifeng Qian
NAI, BDL · 10 Sep 2019

FDA: Feature Disruptive Attack
Aditya Ganeshan, S. VivekB., R. Venkatesh Babu
AAML · 10 Sep 2019

TBT: Targeted Neural Network Attack with Bit Trojan
Adnan Siraj Rakin, Zhezhi He, Deliang Fan
AAML · 10 Sep 2019

Learning to Disentangle Robust and Vulnerable Features for Adversarial Detection
Byunggill Joe, Sung Ju Hwang, I. Shin
AAML · 10 Sep 2019

BOSH: An Efficient Meta Algorithm for Decision-based Attacks
Zhenxin Xiao, Puyudi Yang, Yuchen Eleanor Jiang, Kai-Wei Chang, Cho-Jui Hsieh
AAML · 10 Sep 2019

Improving the Explainability of Neural Sentiment Classifiers via Data Augmentation
Hanjie Chen, Yangfeng Ji
10 Sep 2019

Adversarial Robustness Against the Union of Multiple Perturbation Models
Pratyush Maini, Eric Wong, J. Zico Kolter
OOD, AAML · 09 Sep 2019

When Explainability Meets Adversarial Learning: Detecting Adversarial Examples using SHAP Signatures
Gil Fidel, Ron Bitton, A. Shabtai
FAtt, GAN · 08 Sep 2019

On the Need for Topology-Aware Generative Models for Manifold-Based Defenses
Uyeong Jang, Susmit Jha, S. Jha
AAML · 07 Sep 2019

Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information
Yiren Zhao, Ilia Shumailov, Han Cui, Xitong Gao, Robert D. Mullins, Ross J. Anderson
AAML · 06 Sep 2019

Natural Adversarial Sentence Generation with Gradient-based Perturbation
Yu-Lun Hsieh, Minhao Cheng, Da-Cheng Juan, Wei Wei, W. Hsu, Cho-Jui Hsieh
AAML · 06 Sep 2019

Are Adversarial Robustness and Common Perturbation Robustness Independent Attributes?
Alfred Laugros, A. Caplier, Matthieu Ospici
AAML · 04 Sep 2019

Achieving Verified Robustness to Symbol Substitutions via Interval Bound Propagation
Po-Sen Huang, Robert Stanforth, Johannes Welbl, Chris Dyer, Dani Yogatama, Sven Gowal, Krishnamurthy Dvijotham, Pushmeet Kohli
AAML · 03 Sep 2019

High Accuracy and High Fidelity Extraction of Neural Networks
Matthew Jagielski, Nicholas Carlini, David Berthelot, Alexey Kurakin, Nicolas Papernot
MLAU, MIACV · 03 Sep 2019

Certified Robustness to Adversarial Word Substitutions
Robin Jia, Aditi Raghunathan, Kerem Göksel, Percy Liang
AAML · 03 Sep 2019

Metric Learning for Adversarial Robustness
Chengzhi Mao, Ziyuan Zhong, Junfeng Yang, Carl Vondrick, Baishakhi Ray
OOD · 03 Sep 2019

Defeating Misclassification Attacks Against Transfer Learning
Bang Wu, Shuo Wang, Xingliang Yuan, Cong Wang, Carsten Rudolph, Xiangwen Yang
AAML · 29 Aug 2019

Deep Neural Network Ensembles against Deception: Ensemble Diversity, Accuracy and Robustness
Ling Liu, Wenqi Wei, Ka-Ho Chow, Margaret Loper, Emre Gursoy, Stacey Truex, Yanzhao Wu
UQCV, AAML, FedML · 29 Aug 2019

Adversarial Edit Attacks for Tree Data
Benjamin Paassen
AAML · 25 Aug 2019

Improving Adversarial Robustness via Attention and Adversarial Logit Pairing
Dou Goodman, Xingjian Li, Ji Liu, Jun Huan, Tao Wei
AAML · 23 Aug 2019

AdvHat: Real-world adversarial attack on ArcFace Face ID system
Stepan Alekseevich Komkov, Aleksandr Petiushko
AAML, CVBM · 23 Aug 2019

Testing Robustness Against Unforeseen Adversaries
Maximilian Kaufmann, Daniel Kang, Yi Sun, Steven Basart, Xuwang Yin, ..., Adam Dziedzic, Franziska Boenisch, Tom B. Brown, Jacob Steinhardt, Dan Hendrycks
AAML · 21 Aug 2019

Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks
Ka-Ho Chow, Wenqi Wei, Yanzhao Wu, Ling Liu
AAML · 21 Aug 2019

Saccader: Improving Accuracy of Hard Attention Models for Vision
Gamaleldin F. Elsayed, Simon Kornblith, Quoc V. Le
VLM · 20 Aug 2019

Protecting Neural Networks with Hierarchical Random Switching: Towards Better Robustness-Accuracy Trade-off for Stochastic Defenses
Tianlin Li, Siyue Wang, Pin-Yu Chen, Yanzhi Wang, Brian Kulis, Xue Lin, S. Chin
AAML · 20 Aug 2019

Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited Queries
Fnu Suya, Jianfeng Chi, David Evans, Yuan Tian
AAML · 19 Aug 2019

Adversarial Defense by Suppressing High-frequency Components
Zhendong Zhang, Cheolkon Jung, X. Liang
19 Aug 2019

SPOCC: Scalable POssibilistic Classifier Combination -- toward robust aggregation of classifiers
Mahmoud Albardan, John Klein, O. Colot
18 Aug 2019

Implicit Deep Learning
L. Ghaoui, Fangda Gu, Bertrand Travacca, Armin Askari, Alicia Y. Tsai
AI4CE · 17 Aug 2019

Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks
Jiadong Lin, Chuanbiao Song, Kun He, Liwei Wang, J. Hopcroft
AAML · 17 Aug 2019