arXiv:1706.06083
Towards Deep Learning Models Resistant to Adversarial Attacks
19 June 2017
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
Communities: SILM, OOD

Papers citing "Towards Deep Learning Models Resistant to Adversarial Attacks"
Showing 50 of 6,536 citing papers.

Robust Large-Margin Learning in Hyperbolic Space
Melanie Weber, Manzil Zaheer, A. S. Rawat, A. Menon, Sanjiv Kumar
66 · 33 · 0 · 11 Apr 2020

Luring of transferable adversarial perturbations in the black-box paradigm
Rémi Bernhard, Pierre-Alain Moëllic, J. Dutertre
AAML · 31 · 2 · 0 · 10 Apr 2020

Blind Adversarial Pruning: Balance Accuracy, Efficiency and Robustness
Haidong Xie, Lixin Qian, Xueshuang Xiang, Naijin Liu
AAML · 28 · 1 · 0 · 10 Apr 2020

Blind Adversarial Training: Balance Accuracy and Robustness
Haidong Xie, Xueshuang Xiang, Naijin Liu, Bin Dong
AAML · 14 · 2 · 0 · 10 Apr 2020

Rethinking the Trigger of Backdoor Attack
Yiming Li, Tongqing Zhai, Baoyuan Wu, Yong Jiang, Zhifeng Li, Shutao Xia
LLMSV · 24 · 148 · 0 · 09 Apr 2020

Transferable, Controllable, and Inconspicuous Adversarial Attacks on Person Re-identification With Deep Mis-Ranking
Hongjun Wang, Guangrun Wang, Ya Li, Dongyu Zhang, Liang Lin
AAML · 27 · 83 · 0 · 08 Apr 2020

Learning to fool the speaker recognition
Jiguo Li, Xinfeng Zhang, Jizheng Xu, Li Zhang, Y. Wang, Siwei Ma, Wen Gao
AAML · 30 · 21 · 0 · 07 Apr 2020

Approximate Manifold Defense Against Multiple Adversarial Perturbations
Jay Nandy, Wynne Hsu, Mong Li Lee
AAML · 12 · 12 · 0 · 05 Apr 2020

SimAug: Learning Robust Representations from Simulation for Trajectory Prediction
Junwei Liang, Lu Jiang, Alexander G. Hauptmann
36 · 18 · 0 · 04 Apr 2020

Understanding (Non-)Robust Feature Disentanglement and the Relationship Between Low- and High-Dimensional Adversarial Attacks
Zuowen Wang, Leo Horne
AAML · 16 · 0 · 0 · 04 Apr 2020

SOAR: Second-Order Adversarial Regularization
A. Ma, Fartash Faghri, Nicolas Papernot, Amir-massoud Farahmand
AAML · 21 · 4 · 0 · 04 Apr 2020

Evading Deepfake-Image Detectors with White- and Black-Box Attacks
Nicholas Carlini, Hany Farid
AAML · 18 · 147 · 0 · 01 Apr 2020

Physically Realizable Adversarial Examples for LiDAR Object Detection
James Tu, Mengye Ren, S. Manivasagam, Ming Liang, Binh Yang, Richard Du, Frank Cheng, R. Urtasun
3DPC · 28 · 238 · 0 · 01 Apr 2020

Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes
Sravanti Addepalli, Vivek B.S., Arya Baburaj, Gaurang Sriramanan, R. Venkatesh Babu
AAML · 14 · 32 · 0 · 01 Apr 2020

MetaPoison: Practical General-purpose Clean-label Data Poisoning
Wenjie Huang, Jonas Geiping, Liam H. Fowl, Gavin Taylor, Tom Goldstein
41 · 188 · 0 · 01 Apr 2020

When the Guard failed the Droid: A case study of Android malware
Harel Berger, Chen Hajaj, A. Dvir
AAML · 30 · 7 · 0 · 31 Mar 2020

Inverting Gradients -- How easy is it to break privacy in federated learning?
Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, Michael Moeller
FedML · 51 · 1,205 · 0 · 31 Mar 2020

A Thorough Comparison Study on Adversarial Attacks and Defenses for Common Thorax Disease Classification in Chest X-rays
Ch. Srinivasa Rao, Jingyun Liang, Runhao Zeng, Qi Chen, Huazhu Fu, Yanwu Xu, Mingkui Tan
AAML · 16 · 7 · 0 · 31 Mar 2020

Adversarial Attacks on Multivariate Time Series
Samuel Harford, Fazle Karim, H. Darabi
AI4TS · AAML · 22 · 21 · 0 · 31 Mar 2020

DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips
Fan Yao, Adnan Siraj Rakin, Deliang Fan
AAML · 23 · 156 · 0 · 30 Mar 2020

Improved Gradient based Adversarial Attacks for Quantized Networks
Kartik Gupta, Thalaiyasingam Ajanthan
MQ · 26 · 19 · 0 · 30 Mar 2020

Towards Deep Learning Models Resistant to Large Perturbations
Amirreza Shaeiri, Rozhin Nobahari, M. Rohban
OOD · AAML · 39 · 12 · 0 · 30 Mar 2020

Learning to Learn Single Domain Generalization
Fengchun Qiao, Long Zhao, Xi Peng
OOD · 100 · 435 · 0 · 30 Mar 2020

Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning
Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, Zhangyang Wang
AAML · 18 · 247 · 0 · 28 Mar 2020

Adversarial Imitation Attack
Mingyi Zhou, Jing Wu, Yipeng Liu, Xiaolin Huang, Shuaicheng Liu, Xiang Zhang, Ce Zhu
AAML · 30 · 0 · 0 · 28 Mar 2020

DaST: Data-free Substitute Training for Adversarial Attacks
Mingyi Zhou, Jing Wu, Yipeng Liu, Shuaicheng Liu, Ce Zhu
30 · 142 · 0 · 28 Mar 2020

A Hybrid-Order Distributed SGD Method for Non-Convex Optimization to Balance Communication Overhead, Computational Complexity, and Convergence Rate
Naeimeh Omidvar, M. Maddah-ali, Hamed Mahdavi
ODL · 35 · 3 · 0 · 27 Mar 2020

A copula-based visualization technique for a neural network
Y. Kubo, Yuto Komori, T. Okuyama, Hiroshi Tokieda
FAtt · 22 · 0 · 0 · 27 Mar 2020

A Separation Result Between Data-oblivious and Data-aware Poisoning Attacks
Samuel Deng, Sanjam Garg, S. Jha, Saeed Mahloujifar, Mohammad Mahmoody, Abhradeep Thakurta
20 · 3 · 0 · 26 Mar 2020

Do Deep Minds Think Alike? Selective Adversarial Attacks for Fine-Grained Manipulation of Multiple Deep Neural Networks
Zain Khan, Jirong Yi, R. Mudumbai, Xiaodong Wu, Weiyu Xu
AAML · MLAU · 22 · 1 · 0 · 26 Mar 2020

Volumization as a Natural Generalization of Weight Decay
Liu Ziyin, Zihao Wang, M. Yamada, Masahito Ueda
AI4CE · 16 · 0 · 0 · 25 Mar 2020

Stochastic Zeroth-order Riemannian Derivative Estimation and Optimization
Jiaxiang Li, Krishnakumar Balasubramanian, Shiqian Ma
12 · 5 · 0 · 25 Mar 2020

Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study
Luan Nguyen, Sunpreet S. Arora, Yuhang Wu, Hao Yang
AAML · 25 · 88 · 0 · 24 Mar 2020

Defense Through Diverse Directions
Christopher M. Bender, Yang Li, Yifeng Shi, Michael K. Reiter, Junier B. Oliva
AAML · 16 · 4 · 0 · 24 Mar 2020

Systematic Evaluation of Privacy Risks of Machine Learning Models
Liwei Song, Prateek Mittal
MIACV · 198 · 365 · 0 · 24 Mar 2020

Inherent Adversarial Robustness of Deep Spiking Neural Networks: Effects of Discrete Input Encoding and Non-Linear Activations
Saima Sharmin, Nitin Rathi, Priyadarshini Panda, Kaushik Roy
AAML · 118 · 87 · 0 · 23 Mar 2020

Adversarial Attacks on Monocular Depth Estimation
Ziqi Zhang, Xinge Zhu, Yingwei Li, Xiangqun Chen, Yao Guo
AAML · MDE · 36 · 25 · 0 · 23 Mar 2020

Sample-Specific Output Constraints for Neural Networks
Mathis Brosowsky, Olaf Dünkel, Daniel Slieter, Marius Zöllner
AILaw · PINN · 59 · 10 · 0 · 23 Mar 2020

ARDA: Automatic Relational Data Augmentation for Machine Learning
Nadiia Chepurko, Ryan Marcus, Emanuel Zgraggen, Raul Castro Fernandez, Tim Kraska, David R Karger
24 · 16 · 0 · 21 Mar 2020

Robust Out-of-distribution Detection for Neural Networks
Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, S. Jha
OODD · 168 · 85 · 0 · 21 Mar 2020

Adversarial Robustness on In- and Out-Distribution Improves Explainability
Maximilian Augustin, Alexander Meinke, Matthias Hein
OOD · 96 · 101 · 0 · 20 Mar 2020

Quantum noise protects quantum classifiers against adversaries
Yuxuan Du, Min-hsiu Hsieh, Tongliang Liu, Dacheng Tao, Nana Liu
AAML · 27 · 110 · 0 · 20 Mar 2020

One Neuron to Fool Them All
Anshuman Suri, David Evans
AAML · 19 · 4 · 0 · 20 Mar 2020

Robust Deep Reinforcement Learning against Adversarial Perturbations on State Observations
Huan Zhang, Hongge Chen, Chaowei Xiao, Yue Liu, Mingyan D. Liu, Duane S. Boning, Cho-Jui Hsieh
AAML · 61 · 262 · 0 · 19 Mar 2020

Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates
Amin Ghiasi, Ali Shafahi, Tom Goldstein
42 · 55 · 0 · 19 Mar 2020

Overinterpretation reveals image classification model pathologies
Brandon Carter, Siddhartha Jain, Jonas W. Mueller, David K Gifford
FAtt · 61 · 50 · 0 · 19 Mar 2020

RAB: Provable Robustness Against Backdoor Attacks
Maurice Weber, Xiaojun Xu, Bojan Karlas, Ce Zhang, Yue Liu
AAML · 27 · 161 · 0 · 19 Mar 2020

Face-Off: Adversarial Face Obfuscation
Varun Chandrasekaran, Chuhan Gao, Brian Tang, Kassem Fawaz, S. Jha, Suman Banerjee
PICV · 39 · 44 · 0 · 19 Mar 2020

SAT: Improving Adversarial Training via Curriculum-Based Loss Smoothing
Chawin Sitawarin, S. Chakraborty, David Wagner
AAML · 25 · 38 · 0 · 18 Mar 2020

Vulnerabilities of Connectionist AI Applications: Evaluation and Defence
Christian Berghoff, Matthias Neu, Arndt von Twickel
AAML · 52 · 23 · 0 · 18 Mar 2020