Towards A Rigorous Science of Interpretable Machine Learning
arXiv:1702.08608 · 28 February 2017
Finale Doshi-Velez, Been Kim
XAI · FaML

Papers citing "Towards A Rigorous Science of Interpretable Machine Learning"

50 of 403 citing papers shown.

Explainability of deep vision-based autonomous driving systems: Review and challenges
Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord
XAI · 13 Jan 2021

FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging
Han Guo, Nazneen Rajani, Peter Hase, Mohit Bansal, Caiming Xiong
TDI · 31 Dec 2020

Fairkit, Fairkit, on the Wall, Who's the Fairest of Them All? Supporting Data Scientists in Training Fair Models
Brittany Johnson, Jesse Bartola, Rico Angell, Katherine Keith, Sam Witty, S. Giguere, Yuriy Brun
FaML · 17 Dec 2020

Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges
Kashif Ahmad, Majdi Maabreh, M. Ghaly, Khalil Khan, Junaid Qadir, Ala I. Al-Fuqaha
14 Dec 2020

Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning
Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
FAtt · 10 Dec 2020

Neural Prototype Trees for Interpretable Fine-grained Image Recognition
Meike Nauta, Ron van Bree, C. Seifert
03 Dec 2020

Interpretability and Explainability: A Machine Learning Zoo Mini-tour
Ricards Marcinkevics, Julia E. Vogt
XAI · 03 Dec 2020

Cross-Loss Influence Functions to Explain Deep Network Representations
Andrew Silva, Rohit Chopra, Matthew C. Gombolay
TDI · 03 Dec 2020

Quantifying Explainers of Graph Neural Networks in Computational Pathology
Guillaume Jaume, Pushpak Pati, Behzad Bozorgtabar, Antonio Foncubierta-Rodríguez, Florinda Feroce, A. Anniciello, T. Rau, Jean-Philippe Thiran, M. Gabrani, O. Goksel
FAtt · 25 Nov 2020

Robust and Stable Black Box Explanations
Himabindu Lakkaraju, Nino Arsov, Osbert Bastani
AAML · FAtt · 12 Nov 2020

Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel
FAtt · 23 Oct 2020

A Perspective on Machine Learning Methods in Turbulence Modelling
Andrea Beck, Marius Kurz
AI4CE · 23 Oct 2020

A Survey on Deep Learning and Explainability for Automatic Report Generation from Medical Images
Pablo Messina, Pablo Pino, Denis Parra, Alvaro Soto, Cecilia Besa, S. Uribe, Marcelo Andía, C. Tejos, Claudia Prieto, Daniel Capurro
MedIm · 20 Oct 2020

Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges
Christoph Molnar, Giuseppe Casalicchio, B. Bischl
AI4TS · AI4CE · 19 Oct 2020

Deciding Fast and Slow: The Role of Cognitive Biases in AI-assisted Decision-making
Charvi Rastogi, Yunfeng Zhang, Dennis L. Wei, Kush R. Varshney, Amit Dhurandhar, Richard J. Tomsett
HAI · 15 Oct 2020

PGM-Explainer: Probabilistic Graphical Model Explanations for Graph Neural Networks
Minh Nhat Vu, My T. Thai
BDL · 12 Oct 2020

A Series of Unfortunate Counterfactual Events: the Role of Time in Counterfactual Explanations
Andrea Ferrario, M. Loi
09 Oct 2020

Interpretable Machine Learning for COVID-19: An Empirical Study on Severity Prediction Task
Han-Ching Wu, Wenjie Ruan, Jiangtao Wang, Dingchang Zheng, Bei Liu, ..., Xiangfei Chai, Jian Chen, Kunwei Li, Shaolin Li, A. Helal
30 Sep 2020

Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses
Kaivalya Rawal, Himabindu Lakkaraju
15 Sep 2020

A Game-Based Approach for Helping Designers Learn Machine Learning Concepts
Chelsea M. Myers, Jiachi Xie, Jichen Zhu
11 Sep 2020

The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples
Timo Freiesleben
GAN · 11 Sep 2020

Play MNIST For Me! User Studies on the Effects of Post-Hoc, Example-Based Explanations & Error Rates on Debugging a Deep Learning, Black-Box Classifier
Courtney Ford, Eoin M. Kenny, Mark T. Keane
10 Sep 2020

Explainable Artificial Intelligence for Process Mining: A General Overview and Application of a Novel Local Explanation Approach for Predictive Process Monitoring
Nijat Mehdiyev, Peter Fettke
AI4TS · 04 Sep 2020

Query Understanding via Intent Description Generation
Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Xueqi Cheng
25 Aug 2020

Quantum Language Model with Entanglement Embedding for Question Answering
Yiwei Chen, Yu Pan, D. Dong
23 Aug 2020

Explainable Predictive Process Monitoring
Musabir Musabayli, F. Maggi, Williams Rizzi, Josep Carmona, Chiara Di Francescomarino
04 Aug 2020

Evaluating the performance of the LIME and Grad-CAM explanation methods on a LEGO multi-label image classification task
David Cian, J. C. V. Gemert, A. Lengyel
FAtt · 04 Aug 2020

Interpretable Anomaly Detection with DIFFI: Depth-based Isolation Forest Feature Importance
Mattia Carletti, M. Terzi, Gian Antonio Susto
21 Jul 2020

Sequential Explanations with Mental Model-Based Policies
A. Yeung, Shalmali Joshi, Joseph Jay Williams, Frank Rudzicz
FAtt · LRM · 17 Jul 2020

Learning Reasoning Strategies in End-to-End Differentiable Proving
Pasquale Minervini, Sebastian Riedel, Pontus Stenetorp, Edward Grefenstette, Tim Rocktäschel
LRM · 13 Jul 2020

Algorithmic Fairness in Education
René F. Kizilcec, Hansol Lee
FaML · 10 Jul 2020

Drug discovery with explainable artificial intelligence
José Jiménez-Luna, F. Grisoni, G. Schneider
01 Jul 2020

BERTology Meets Biology: Interpreting Attention in Protein Language Models
Jesse Vig, Ali Madani, L. Varshney, Caiming Xiong, R. Socher, Nazneen Rajani
26 Jun 2020

DeltaGrad: Rapid retraining of machine learning models
Yinjun Wu, Edgar Dobriban, S. Davidson
MU · 26 Jun 2020

Interpretable Deep Models for Cardiac Resynchronisation Therapy Response Prediction
Esther Puyol-Antón, C. L. P. Chen, J. Clough, B. Ruijsink, B. Sidhu, ..., M. Elliott, Vishal S. Mehta, Daniel Rueckert, C. Rinaldi, A. King
24 Jun 2020

Fair Performance Metric Elicitation
G. Hiranandani, Harikrishna Narasimhan, Oluwasanmi Koyejo
23 Jun 2020

Does Explainable Artificial Intelligence Improve Human Decision-Making?
Y. Alufaisan, L. Marusich, J. Bakdash, Yan Zhou, Murat Kantarcioglu
XAI · 19 Jun 2020

Detecting unusual input to neural networks
Jörg Martin, Clemens Elster
AAML · 15 Jun 2020

Getting a CLUE: A Method for Explaining Uncertainty Estimates
Javier Antorán, Umang Bhatt, T. Adel, Adrian Weller, José Miguel Hernández-Lobato
UQCV · BDL · 11 Jun 2020

How Interpretable and Trustworthy are GAMs?
C. Chang, S. Tan, Benjamin J. Lengerich, Anna Goldenberg, R. Caruana
FAtt · 11 Jun 2020

Interpretable Deep Graph Generation with Node-Edge Co-Disentanglement
Xiaojie Guo, Liang Zhao, Zhao Qin, Lingfei Wu, Amarda Shehu, Yanfang Ye
CoGe · DRL · 09 Jun 2020

XGNN: Towards Model-Level Explanations of Graph Neural Networks
Haonan Yuan, Jiliang Tang, Xia Hu, Shuiwang Ji
03 Jun 2020

A Performance-Explainability Framework to Benchmark Machine Learning Methods: Application to Multivariate Time Series Classifiers
Kevin Fauvel, Véronique Masson, Elisa Fromont
AI4TS · 29 May 2020

Local and Global Explanations of Agent Behavior: Integrating Strategy Summaries with Saliency Maps
Tobias Huber, Katharina Weitz, Elisabeth André, Ofra Amir
FAtt · 18 May 2020

Clinical Predictive Models for COVID-19: Systematic Study
Patrick Schwab, August DuMont Schütte, Benedikt Dietz, Stefan Bauer
OOD · ELM · 17 May 2020

Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions
Xiaochuang Han, Byron C. Wallace, Yulia Tsvetkov
MILM · FAtt · AAML · TDI · 14 May 2020

Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran
AAML · XAI · 30 Apr 2020

Adversarial Attacks and Defenses: An Interpretation Perspective
Ninghao Liu, Mengnan Du, Ruocheng Guo, Huan Liu, Xia Hu
AAML · 23 Apr 2020

CrossCheck: Rapid, Reproducible, and Interpretable Model Evaluation
Dustin L. Arendt, Zhuanyi Huang, Prasha Shrestha, Ellyn Ayton, M. Glenski, Svitlana Volkova
16 Apr 2020

Human Evaluation of Interpretability: The Case of AI-Generated Music Knowledge
Haizi Yu, Heinrich Taube, James A. Evans, L. Varshney
15 Apr 2020