Playing it Safe: Adversarial Robustness with an Abstain Option (arXiv: 1911.11253)
25 November 2019
Cassidy Laidlaw, Soheil Feizi
AAML

Papers citing "Playing it Safe: Adversarial Robustness with an Abstain Option"

21 papers shown

 1. Adversarial Robustness through Local Linearization. Chongli Qin, James Martens, Sven Gowal, Dilip Krishnan, Krishnamurthy Dvijotham, Alhussein Fawzi, Soham De, Robert Stanforth, Pushmeet Kohli. AAML. 04 Jul 2019.
 2. Functional Adversarial Attacks. Cassidy Laidlaw, Soheil Feizi. AAML. 29 May 2019.
 3. Controlling Neural Level Sets. Matan Atzmon, Niv Haim, Lior Yariv, Ofer Israelov, Haggai Maron, Y. Lipman. AI4CE. 28 May 2019.
 4. Unrestricted Adversarial Examples via Semantic Manipulation. Anand Bhattad, Min Jin Chong, Kaizhao Liang, Yangqiu Song, David A. Forsyth. AAML. 12 Apr 2019.
 5. Wasserstein Adversarial Examples via Projected Sinkhorn Iterations. Eric Wong, Frank R. Schmidt, J. Zico Kolter. AAML. 21 Feb 2019.
 6. Theoretically Principled Trade-off between Robustness and Accuracy. Hongyang R. Zhang, Yaodong Yu, Jiantao Jiao, Eric Xing, L. Ghaoui, Michael I. Jordan. 24 Jan 2019.
 7. Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem. Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf. OODD. 13 Dec 2018.
 8. Are adversarial examples inevitable? Ali Shafahi, Wenjie Huang, Christoph Studer, Soheil Feizi, Tom Goldstein. SILM. 06 Sep 2018.
 9. Improving DNN Robustness to Adversarial Attacks using Jacobian Regularization. Daniel Jakubovitz, Raja Giryes. AAML. 23 Mar 2018.
10. Spatially Transformed Adversarial Examples. Chaowei Xiao, Jun-Yan Zhu, Yue Liu, Warren He, M. Liu, D. Song. AAML. 08 Jan 2018.
11. Exploring the Landscape of Spatial Robustness. Logan Engstrom, Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, Aleksander Madry. AAML. 07 Dec 2017.
12. Robust Physical-World Attacks on Deep Learning Models. Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Yue Liu, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, D. Song. AAML. 27 Jul 2017.
13. Towards Deep Learning Models Resistant to Adversarial Attacks. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu. SILM, OOD. 19 Jun 2017.
14. Detecting Adversarial Image Examples in Deep Networks with Adaptive Noise Reduction. Bin Liang, Hongcheng Li, Miaoqiang Su, Xirong Li, Wenchang Shi, Xiaofeng Wang. AAML. 23 May 2017.
15. Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods. Nicholas Carlini, D. Wagner. AAML. 20 May 2017.
16. Towards Evaluating the Robustness of Neural Networks. Nicholas Carlini, D. Wagner. OOD, AAML. 16 Aug 2016.
17. Wide Residual Networks. Sergey Zagoruyko, N. Komodakis. 23 May 2016.
18. DeepFool: a simple and accurate method to fool deep neural networks. Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard. AAML. 14 Nov 2015.
19. Adam: A Method for Stochastic Optimization. Diederik P. Kingma, Jimmy Ba. ODL. 22 Dec 2014.
20. Explaining and Harnessing Adversarial Examples. Ian Goodfellow, Jonathon Shlens, Christian Szegedy. AAML, GAN. 20 Dec 2014.
21. Intriguing properties of neural networks. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus. AAML. 21 Dec 2013.