Quantifying Interpretability and Trust in Machine Learning Systems


20 January 2019
Philipp Schmidt, F. Biessmann
ArXiv (abs) · PDF · HTML

Papers citing "Quantifying Interpretability and Trust in Machine Learning Systems"

16 of 16 citing papers shown. Each entry lists the title, then the authors, topic tags (where assigned), the listing's three metric counts, and the publication date.
Less is More: The Influence of Pruning on the Explainability of CNNs
David Weber, F. Merkle, Pascal Schöttle, Stephan Schlögl, Martin Nocker · FAtt · 173 / 1 / 0 · 17 Feb 2023

The Promise and Peril of Human Evaluation for Model Interpretability
Bernease Herman · 69 / 144 / 0 · 20 Nov 2017

The Doctor Just Won't Accept That!
Zachary Chase Lipton · FaML · 63 / 101 / 0 · 20 Nov 2017

Explanation in Artificial Intelligence: Insights from the Social Sciences
Tim Miller · XAI · 254 / 4,281 / 0 · 22 Jun 2017

A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee · FAtt · 1.1K / 22,090 / 0 · 22 May 2017

Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang · TDI · 219 / 2,910 / 0 · 14 Mar 2017

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim · XAI, FaML · 410 / 3,820 / 0 · 28 Feb 2017

The Mythos of Model Interpretability
Zachary Chase Lipton · FaML · 183 / 3,708 / 0 · 10 Jun 2016

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt, FaML · 1.2K / 17,071 / 0 · 16 Feb 2016

Explaining NonLinear Classification Decisions with Deep Taylor Decomposition
G. Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, Klaus-Robert Müller · FAtt · 68 / 739 / 0 · 08 Dec 2015

Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model
Benjamin Letham, Cynthia Rudin, Tyler H. McCormick, D. Madigan · FAtt · 72 / 743 / 0 · 05 Nov 2015

Evaluating the visualization of what a Deep Neural Network has learned
Wojciech Samek, Alexander Binder, G. Montavon, Sebastian Lapuschkin, K. Müller · XAI · 139 / 1,200 / 0 · 21 Sep 2015

Explaining and Harnessing Adversarial Examples
Ian Goodfellow, Jonathon Shlens, Christian Szegedy · AAML, GAN · 282 / 19,129 / 0 · 20 Dec 2014

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman · FAtt · 314 / 7,321 / 0 · 20 Dec 2013

Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler, Rob Fergus · FAtt, SSL · 603 / 15,907 / 0 · 12 Nov 2013

The Feature Importance Ranking Measure
A. Zien, Nicole Krämer, Soeren Sonnenburg, Gunnar Rätsch · FAtt · 102 / 147 / 0 · 23 Jun 2009