
On the Detection of Anomalous or Out-Of-Distribution Data in Vision Models Using Statistical Techniques

arXiv:2403.15497, 21 March 2024
Laura O'Mahony, David JP O'Sullivan, Nikola S. Nikolov

Papers citing "On the Detection of Anomalous or Out-Of-Distribution Data in Vision Models Using Statistical Techniques" (12 papers shown):

  • Detection of out-of-distribution samples using binary neuron activation patterns
    Bartlomiej Olber, Krystian Radlak, A. Popowicz, Michal Szczepankiewicz, K. Chachula
    OODD; 17 citations; 29 Dec 2022

  • Out-of-Distribution Detection with Class Ratio Estimation
    Mingtian Zhang, Andi Zhang, Tim Z. Xiao, Yitong Sun, Jingyu Sun
    OODD; 5 citations; 08 Jun 2022

  • Rethinking Neural Networks With Benford's Law
    Surya Kant Sahu, Abhinav Java, Arshad Shaikh, Yannic Kilcher
    2 citations; 05 Feb 2021

  • WILDS: A Benchmark of in-the-Wild Distribution Shifts
    Pang Wei Koh, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, ..., A. Kundaje, Emma Pierson, Sergey Levine, Chelsea Finn, Percy Liang
    OOD; 1,445 citations; 14 Dec 2020

  • Understanding Anomaly Detection with Deep Invertible Networks through Hierarchies of Distributions and Features
    R. Schirrmeister, Yuxuan Zhou, T. Ball, Dan Zhang
    UQCV; 88 citations; 18 Jun 2020

  • Outside the Box: Abstraction-Based Monitoring of Neural Networks
    T. Henzinger, Anna Lukina, Christian Schilling
    AAML; 59 citations; 20 Nov 2019

  • Detecting Out-of-Distribution Inputs in Deep Neural Networks Using an Early-Layer Output
    Vahdat Abdelzad, Krzysztof Czarnecki, Rick Salay, Taylor Denouden, Sachin Vernekar, Buu Phan
    OODD; 47 citations; 23 Oct 2019

  • Deep Anomaly Detection with Outlier Exposure
    Dan Hendrycks, Mantas Mazeika, Thomas G. Dietterich
    OODD; 1,487 citations; 11 Dec 2018

  • A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
    Dan Hendrycks, Kevin Gimpel
    UQCV; 3,480 citations; 07 Oct 2016

  • Concrete Problems in AI Safety
    Dario Amodei, C. Olah, Jacob Steinhardt, Paul Christiano, John Schulman, Dandelion Mané
    2,404 citations; 21 Jun 2016

  • Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images
    Anh Totti Nguyen, J. Yosinski, Jeff Clune
    AAML; 3,275 citations; 05 Dec 2014

  • ImageNet Large Scale Visual Recognition Challenge
    Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei
    VLM, ObjD; 39,615 citations; 01 Sep 2014