Certifiably Adversarially Robust Detection of Out-of-Distribution Data
Julian Bitterwolf, Alexander Meinke, Matthias Hein
arXiv:2007.08473, 16 July 2020

Papers citing "Certifiably Adversarially Robust Detection of Out-of-Distribution Data" (19 papers)

Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
Francesco Croce, Matthias Hein. AAML. 03 Mar 2020

Towards neural networks that provably know when they don't know
Alexander Meinke, Matthias Hein. OODD. 26 Sep 2019

Sparse and Imperceivable Adversarial Attacks
Francesco Croce, Matthias Hein. AAML. 11 Sep 2019

Towards Stable and Efficient Training of Verifiably Robust Neural Networks
Huan Zhang, Hongge Chen, Chaowei Xiao, Sven Gowal, Robert Stanforth, Yue Liu, Duane S. Boning, Cho-Jui Hsieh. AAML. 14 Jun 2019

Certified Adversarial Robustness via Randomized Smoothing
Jeremy M. Cohen, Elan Rosenfeld, J. Zico Kolter. AAML. 08 Feb 2019

Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem
Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf. OODD. 13 Dec 2018

Deep Anomaly Detection with Outlier Exposure
Dan Hendrycks, Mantas Mazeika, Thomas G. Dietterich. OODD. 11 Dec 2018

Generalisation in humans and deep neural networks
Robert Geirhos, Carlos R. Medina Temme, Jonas Rauber, Heiko H. Schütt, Matthias Bethge, Felix Wichmann. OOD. 27 Aug 2018

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks
Kimin Lee, Kibok Lee, Honglak Lee, Jinwoo Shin. OODD. 10 Jul 2018

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, D. Wagner. AAML. 01 Feb 2018

Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples
Kimin Lee, Honglak Lee, Kibok Lee, Jinwoo Shin. OODD. 26 Nov 2017

Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms
Han Xiao, Kashif Rasul, Roland Vollgraf. 25 Aug 2017

Evasion Attacks against Machine Learning at Test Time
Battista Biggio, Igino Corona, Davide Maiorca, B. Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, Fabio Roli. AAML. 21 Aug 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu. SILM, OOD. 19 Jun 2017

Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini, D. Wagner. AAML. 20 May 2017

A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
Dan Hendrycks, Kevin Gimpel. UQCV. 07 Oct 2016

Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner. OOD, AAML. 16 Aug 2016

Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images
Anh Totti Nguyen, J. Yosinski, Jeff Clune. AAML. 05 Dec 2014

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus. AAML. 21 Dec 2013