Deciphering the Definition of Adversarial Robustness for post-hoc OOD Detectors
Peter Lorenz, Mario Fernandez, Jens Müller, Ullrich Köthe
AAML · 21 June 2024 · arXiv: 2406.15104

Papers citing "Deciphering the Definition of Adversarial Robustness for post-hoc OOD Detectors"

Showing 50 of 102 citing papers.

Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, B. Guo
ViT · 25 Mar 2021

Learning Transferable Visual Models From Natural Language Supervision
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya A. Ramesh, Gabriel Goh, ..., Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever
CLIP, VLM · 26 Feb 2021

RepVGG: Making VGG-style ConvNets Great Again
Xiaohan Ding, Xinming Zhang, Ningning Ma, Jungong Han, Guiguang Ding, Jian Sun
11 Jan 2021

Training data-efficient image transformers & distillation through attention
Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou
ViT · 23 Dec 2020

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby
ViT · 22 Oct 2020

RobustBench: a standardized adversarial robustness benchmark
Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti, Nicolas Flammarion, M. Chiang, Prateek Mittal, Matthias Hein
VLM · 19 Oct 2020

Weight-Covariance Alignment for Adversarially Robust Neural Networks
Panagiotis Eustratiadis, Henry Gouk, Da Li, Timothy M. Hospedales
OOD, AAML · 17 Oct 2020

Energy-based Out-of-distribution Detection
Weitang Liu, Xiaoyun Wang, John Douglas Owens, Yixuan Li
OODD · 08 Oct 2020

Open-set Adversarial Defense
Rui Shao, Pramuditha Perera, Pong C. Yuen, Vishal M. Patel
AAML · 02 Sep 2020

A simple defense against adversarial attacks on heatmap explanations
Laura Rieger, Lars Kai Hansen
FAtt, AAML · 13 Jul 2020

A Critical Evaluation of Open-World Machine Learning
Liwei Song, Vikash Sehwag, A. Bhagoji, Prateek Mittal
AAML · 08 Jul 2020

Measuring Robustness to Natural Distribution Shifts in Image Classification
Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht, Ludwig Schmidt
OOD · 01 Jul 2020

ATOM: Robustifying Out-of-distribution Detection Using Outlier Mining
Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, S. Jha
OODD · 26 Jun 2020

Towards Understanding Fast Adversarial Training
Bai Li, Shiqi Wang, Suman Jana, Lawrence Carin
AAML · 04 Jun 2020

Designing Network Design Spaces
Ilija Radosavovic, Raj Prateek Kosaraju, Ross B. Girshick, Kaiming He, Piotr Dollár
GNN · 30 Mar 2020

TResNet: High Performance GPU-Dedicated Architecture
T. Ridnik, Hussam Lawen, Asaf Noy, Emanuel Ben-Baruch, Gilad Sharir, Itamar Friedman
OOD · 30 Mar 2020

Robust Out-of-distribution Detection for Neural Networks
Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, S. Jha
OODD · 21 Mar 2020

Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks
Francesco Croce, Matthias Hein
AAML · 03 Mar 2020

Why is the Mahalanobis Distance Effective for Anomaly Detection?
Ryo Kamoi, Kei Kobayashi
OODD · 01 Mar 2020

On Adaptive Attacks to Adversarial Example Defenses
Florian Tramèr, Nicholas Carlini, Wieland Brendel, Aleksander Madry
AAML · 19 Feb 2020

Adversarial Distributional Training for Robust Deep Learning
Yinpeng Dong, Zhijie Deng, Tianyu Pang, Hang Su, Jun Zhu
OOD · 14 Feb 2020

Big Transfer (BiT): General Visual Representation Learning
Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, J. Puigcerver, Jessica Yung, Sylvain Gelly, N. Houlsby
MQ · 24 Dec 2019

Scaling Out-of-Distribution Detection for Real-World Settings
Dan Hendrycks, Steven Basart, Mantas Mazeika, Andy Zou, Joe Kwon, Mohammadreza Mostajabi, Jacob Steinhardt, Basel Alomair
OODD · 25 Nov 2019

Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks
Mehdi Neshat, Zifan Wang, Bradley Alexander, Fan Yang, Zijian Zhang, Sirui Ding, Markus Wagner, Xia Hu
FAtt · 03 Oct 2019

Natural Adversarial Examples
Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, Basel Alomair
OODD · 16 Jul 2019

Fixing the train-test resolution discrepancy
Hugo Touvron, Andrea Vedaldi, Matthijs Douze, Hervé Jégou
14 Jun 2019

Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples
Vikash Sehwag, A. Bhagoji, Liwei Song, Chawin Sitawarin, Daniel Cullina, M. Chiang, Prateek Mittal
OODD · 05 May 2019

Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
Dan Hendrycks, Thomas G. Dietterich
OOD, VLM · 28 Mar 2019

On Evaluating Adversarial Robustness
Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin
ELM, AAML · 18 Feb 2019

Attention, Please! Adversarial Defense via Activation Rectification and Preservation
Shangxi Wu, Jitao Sang, Kaiyuan Xu, Jiaming Zhang, Jian Yu
AAML · 24 Nov 2018

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks
Kimin Lee, Kibok Lee, Honglak Lee, Jinwoo Shin
OODD · 10 Jul 2018

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples
Anish Athalye, Nicholas Carlini, D. Wagner
AAML · 01 Feb 2018

MobileNetV2: Inverted Residuals and Linear Bottlenecks
Mark Sandler, Andrew G. Howard, Menglong Zhu, A. Zhmoginov, Liang-Chieh Chen
13 Jan 2018

Evasion Attacks against Machine Learning at Test Time
Battista Biggio, Igino Corona, Davide Maiorca, B. Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, Fabio Roli
AAML · 21 Aug 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
SILM, OOD · 19 Jun 2017

Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
Shiyu Liang, Yixuan Li, R. Srikant
UQCV, OODD · 08 Jun 2017

Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini, D. Wagner
AAML · 20 May 2017

Grad-CAM: Why did you say that?
Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell, Devi Parikh, Dhruv Batra
FAtt · 22 Nov 2016

Delving into Transferable Adversarial Examples and Black-box Attacks
Yanpei Liu, Xinyun Chen, Chang-rui Liu, Basel Alomair
AAML · 08 Nov 2016

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
FAtt · 07 Oct 2016

A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
Dan Hendrycks, Kevin Gimpel
UQCV · 07 Oct 2016

Densely Connected Convolutional Networks
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
PINN, 3DV · 25 Aug 2016

Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner
OOD, AAML · 16 Aug 2016

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML · 08 Jul 2016

Wide Residual Networks
Sergey Zagoruyko, N. Komodakis
23 May 2016

Identity Mappings in Deep Residual Networks
Kaiming He, Xinming Zhang, Shaoqing Ren, Jian Sun
16 Mar 2016

Practical Black-Box Attacks against Machine Learning
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami
MLAU, AAML · 08 Feb 2016

Learning Deep Features for Discriminative Localization
Bolei Zhou, A. Khosla, Àgata Lapedriza, A. Oliva, Antonio Torralba
SSL, SSeg, FAtt · 14 Dec 2015

Deep Residual Learning for Image Recognition
Kaiming He, Xinming Zhang, Shaoqing Ren, Jian Sun
MedIm · 10 Dec 2015

DeepFool: a simple and accurate method to fool deep neural networks
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard
AAML · 14 Nov 2015