Interpreting Neural Networks through Mahalanobis Distance
Alan Oursland · 25 October 2024 · arXiv:2410.19352
Tags: FAtt, MILM
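As background, the Mahalanobis distance named in the paper's title measures how far a point $x$ lies from a distribution with mean $\mu$ and covariance $\Sigma$, generalizing Euclidean distance by accounting for the scale and correlation of features:

$$D_M(x) = \sqrt{(x - \mu)^\top \Sigma^{-1} (x - \mu)}$$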
Papers citing "Interpreting Neural Networks through Mahalanobis Distance" (12 of 12 shown)

| Title | Authors | Tags | Metrics | Date |
| --- | --- | --- | --- | --- |
| Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV) | Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, Fernanda Viégas, Rory Sayres | FAtt | 219 · 1,850 · 0 | 30 Nov 2017 |
| Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions | Oscar Li, Hao Liu, Chaofan Chen, Cynthia Rudin | — | 178 · 592 · 0 | 13 Oct 2017 |
| A Unified Approach to Interpreting Model Predictions | Scott M. Lundberg, Su-In Lee | FAtt | 1.1K · 22,018 · 0 | 22 May 2017 |
| Gaussian Error Linear Units (GELUs) | Dan Hendrycks, Kevin Gimpel | — | 174 · 5,042 · 0 | 27 Jun 2016 |
| The Mythos of Model Interpretability | Zachary Chase Lipton | FaML | 183 · 3,706 · 0 | 10 Jun 2016 |
| "Why Should I Trust You?": Explaining the Predictions of Any Classifier | Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin | FAtt, FaML | 1.2K · 17,033 · 0 | 16 Feb 2016 |
| Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs) | Djork-Arné Clevert, Thomas Unterthiner, Sepp Hochreiter | — | 305 · 5,534 · 0 | 23 Nov 2015 |
| Weight Uncertainty in Neural Networks | Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, Daan Wierstra | UQCV, BDL | 192 · 1,892 · 0 | 20 May 2015 |
| Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift | Sergey Ioffe, Christian Szegedy | OOD | 465 · 43,341 · 0 | 11 Feb 2015 |
| Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification | Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun | VLM | 338 · 18,651 · 0 | 06 Feb 2015 |
| Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps | Karen Simonyan, Andrea Vedaldi, Andrew Zisserman | FAtt | 314 · 7,316 · 0 | 20 Dec 2013 |
| A Tutorial on Spectral Clustering | Ulrike von Luxburg | — | 290 · 10,543 · 0 | 01 Nov 2007 |