arXiv: 1702.08608
Towards A Rigorous Science of Interpretable Machine Learning
28 February 2017
Finale Doshi-Velez
Been Kim
XAI
FaML
Papers citing
"Towards A Rigorous Science of Interpretable Machine Learning"
50 / 404 papers shown
Title
Attribution-based XAI Methods in Computer Vision: A Review
Kumar Abhishek
Deeksha Kamath
27
18
0
27 Nov 2022
MEGAN: Multi-Explanation Graph Attention Network
Jonas Teufel
Luca Torresi
Patrick Reiser
Pascal Friederich
16
8
0
23 Nov 2022
Concept-based Explanations using Non-negative Concept Activation Vectors and Decision Tree for CNN Models
Gayda Mutahar
Tim Miller
FAtt
24
6
0
19 Nov 2022
CRAFT: Concept Recursive Activation FacTorization for Explainability
Thomas Fel
Agustin Picard
Louis Bethune
Thibaut Boissin
David Vigouroux
Julien Colin
Rémi Cadène
Thomas Serre
19
102
0
17 Nov 2022
Explainable Artificial Intelligence: Precepts, Methods, and Opportunities for Research in Construction
Peter E. D. Love
Weili Fang
J. Matthews
Stuart Porter
Hanbin Luo
L. Ding
XAI
29
7
0
12 Nov 2022
On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti
Karthik Balaji Ganesh
Manoj Gayala
Nandita Lakshmi Tunuguntla
Sandesh Kamath
V. Balasubramanian
XAI
FAtt
AAML
32
4
0
09 Nov 2022
Individualized and Global Feature Attributions for Gradient Boosted Trees in the Presence of $\ell_2$ Regularization
Qingyao Sun
26
2
0
08 Nov 2022
ViT-CX: Causal Explanation of Vision Transformers
Weiyan Xie
Xiao-hui Li
Caleb Chen Cao
Nevin L. Zhang
ViT
24
17
0
06 Nov 2022
Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI)
M. Nizami
Muhammad Yaseen Khan
Alessandro Bogliolo
11
3
0
31 Oct 2022
Generating Hierarchical Explanations on Text Classification Without Connecting Rules
Yiming Ju
Yuanzhe Zhang
Kang Liu
Jun Zhao
FAtt
18
3
0
24 Oct 2022
Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information
I. Nejadgholi
Esma Balkir
Kathleen C. Fraser
S. Kiritchenko
32
3
0
19 Oct 2022
Towards Explaining Distribution Shifts
Sean Kulinski
David I. Inouye
OffRL
FAtt
35
23
0
19 Oct 2022
Machine Learning in Transaction Monitoring: The Prospect of xAI
Julie Gerlings
Ioanna D. Constantiou
17
2
0
14 Oct 2022
On the Explainability of Natural Language Processing Deep Models
Julia El Zini
M. Awad
25
82
0
13 Oct 2022
Neurosymbolic Programming for Science
Jennifer J. Sun
Megan Tjandrasuwita
Atharva Sehgal
Armando Solar-Lezama
Swarat Chaudhuri
Yisong Yue
Omar Costilla-Reyes
NAI
35
12
0
10 Oct 2022
Using Knowledge Distillation to improve interpretable models in a retail banking context
Maxime Biehler
Mohamed Guermazi
Célim Starck
49
2
0
30 Sep 2022
Empowering the trustworthiness of ML-based critical systems through engineering activities
J. Mattioli
Agnès Delaborde
Souhaiel Khalfaoui
Freddy Lecue
H. Sohier
F. Jurie
9
2
0
30 Sep 2022
Counterfactual Explanations Using Optimization With Constraint Learning
Donato Maragno
Tabea E. Rober
Ilker Birbil
CML
47
10
0
22 Sep 2022
XClusters: Explainability-first Clustering
Hyunseung Hwang
Steven Euijong Whang
21
5
0
22 Sep 2022
Summarization Programs: Interpretable Abstractive Summarization with Neural Modular Trees
Swarnadeep Saha
Shiyue Zhang
Peter Hase
Mohit Bansal
26
19
0
21 Sep 2022
The Ability of Image-Language Explainable Models to Resemble Domain Expertise
P. Werner
Anna Zapaishchykova
Ujjwal Ratan
40
2
0
19 Sep 2022
MSVIPER: Improved Policy Distillation for Reinforcement-Learning-Based Robot Navigation
Aaron M. Roth
Jing Liang
Ram D. Sriram
Elham Tabassi
Dinesh Manocha
24
1
0
19 Sep 2022
Explainable AI for clinical and remote health applications: a survey on tabular and time series data
Flavio Di Martino
Franca Delmastro
AI4TS
23
91
0
14 Sep 2022
Lost in Translation: Reimagining the Machine Learning Life Cycle in Education
Lydia T. Liu
Serena Wang
Tolani A. Britton
Rediet Abebe
AI4Ed
19
1
0
08 Sep 2022
Making the black-box brighter: interpreting machine learning algorithm for forecasting drilling accidents
E. Gurina
Nikita Klyuchnikov
Ksenia Antipova
D. Koroteev
FAtt
25
8
0
06 Sep 2022
Intelligent Traffic Monitoring with Hybrid AI
Ehsan Qasemi
A. Oltramari
11
3
0
31 Aug 2022
SoK: Explainable Machine Learning for Computer Security Applications
A. Nadeem
D. Vos
Clinton Cao
Luca Pajola
Simon Dieck
Robert Baumgartner
S. Verwer
29
40
0
22 Aug 2022
Causality-Inspired Taxonomy for Explainable Artificial Intelligence
Pedro C. Neto
Tiago B. Gonçalves
João Ribeiro Pinto
W. Silva
Ana F. Sequeira
Arun Ross
Jaime S. Cardoso
XAI
28
12
0
19 Aug 2022
An Empirical Comparison of Explainable Artificial Intelligence Methods for Clinical Data: A Case Study on Traumatic Brain Injury
Amin Nayebi
Sindhu Tipirneni
Brandon Foreman
Chandan K. Reddy
V. Subbian
26
3
0
13 Aug 2022
An Interpretability Evaluation Benchmark for Pre-trained Language Models
Ya-Ming Shen
Lijie Wang
Ying Chen
Xinyan Xiao
Jing Liu
Hua-Hong Wu
31
4
0
28 Jul 2022
LightX3ECG: A Lightweight and eXplainable Deep Learning System for 3-lead Electrocardiogram Classification
Khiem H. Le
Hieu H. Pham
Thao BT. Nguyen
Tu Nguyen
T. Thanh
Cuong D. Do
18
34
0
25 Jul 2022
A general-purpose method for applying Explainable AI for Anomaly Detection
John Sipple
Abdou Youssef
22
14
0
23 Jul 2022
A clinically motivated self-supervised approach for content-based image retrieval of CT liver images
Kristoffer Wickstrøm
Eirik Agnalt Østmo
Keyur Radiya
Karl Øyvind Mikalsen
Michael C. Kampffmeyer
Robert Jenssen
SSL
21
13
0
11 Jul 2022
Evaluating Human-like Explanations for Robot Actions in Reinforcement Learning Scenarios
Francisco Cruz
Charlotte Young
Richard Dazeley
Peter Vamplew
22
9
0
07 Jul 2022
"Even if ..." -- Diverse Semifactual Explanations of Reject
André Artelt
Barbara Hammer
33
12
0
05 Jul 2022
FRAME: Evaluating Rationale-Label Consistency Metrics for Free-Text Rationales
Aaron Chan
Shaoliang Nie
Liang Tan
Xiaochang Peng
Hamed Firooz
Maziar Sanjabi
Xiang Ren
40
9
0
02 Jul 2022
Why we do need Explainable AI for Healthcare
Giovanni Cinà
Tabea E. Rober
Rob Goedhart
Ilker Birbil
30
14
0
30 Jun 2022
Auditing Visualizations: Transparency Methods Struggle to Detect Anomalous Behavior
Jean-Stanislas Denain
Jacob Steinhardt
AAML
31
7
0
27 Jun 2022
Stop ordering machine learning algorithms by their explainability! A user-centered investigation of performance and explainability
L. Herm
Kai Heinrich
Jonas Wanner
Christian Janiesch
13
84
0
20 Jun 2022
Towards ML Methods for Biodiversity: A Novel Wild Bee Dataset and Evaluations of XAI Methods for ML-Assisted Rare Species Annotations
Teodor Chiaburu
F. Biessmann
Frank Haußer
30
2
0
15 Jun 2022
Mediators: Conversational Agents Explaining NLP Model Behavior
Nils Feldhus
A. Ravichandran
Sebastian Möller
27
16
0
13 Jun 2022
Challenges in Applying Explainability Methods to Improve the Fairness of NLP Models
Esma Balkir
S. Kiritchenko
I. Nejadgholi
Kathleen C. Fraser
21
36
0
08 Jun 2022
Explainable Artificial Intelligence (XAI) for Internet of Things: A Survey
İbrahim Kök
Feyza Yıldırım Okay
Özgecan Muyanlı
S. Özdemir
XAI
14
51
0
07 Jun 2022
A Human-Centric Take on Model Monitoring
Murtuza N. Shergadwala
Himabindu Lakkaraju
K. Kenthapadi
37
9
0
06 Jun 2022
Use-Case-Grounded Simulations for Explanation Evaluation
Valerie Chen
Nari Johnson
Nicholay Topin
Gregory Plumb
Ameet Talwalkar
FAtt
ELM
20
24
0
05 Jun 2022
OmniXAI: A Library for Explainable AI
Wenzhuo Yang
Hung Le
Tanmay Laud
Silvio Savarese
S. Hoi
SyDa
21
39
0
01 Jun 2022
Attribution-based Explanations that Provide Recourse Cannot be Robust
H. Fokkema
R. D. Heide
T. Erven
FAtt
44
18
0
31 May 2022
MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation
Wenzhuo Yang
Jia Li
Caiming Xiong
S. Hoi
CML
19
13
0
31 May 2022
Gradient-based Counterfactual Explanations using Tractable Probabilistic Models
Xiaoting Shao
Kristian Kersting
BDL
22
1
0
16 May 2022
Can counterfactual explanations of AI systems' predictions skew lay users' causal intuitions about the world? If so, can we correct for that?
Marko Tešić
U. Hahn
CML
14
5
0
12 May 2022