arXiv:1806.07538
Towards Robust Interpretability with Self-Explaining Neural Networks
David Alvarez-Melis, Tommi Jaakkola
20 June 2018
Tags: MILM, XAI
Papers citing "Towards Robust Interpretability with Self-Explaining Neural Networks" (50 of 507 papers shown):
- Measuring Association Between Labels and Free-Text Rationales. Sarah Wiegreffe, Ana Marasović, Noah A. Smith. 24 Oct 2020. [282 / 170 / 0]
- A Framework to Learn with Interpretation. Jayneel Parekh, Pavlo Mozharovskyi, Florence d'Alché-Buc (AI4CE, FAtt). 19 Oct 2020. [25 / 30 / 0]
- Altruist: Argumentative Explanations through Local Interpretations of Predictive Models. Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas. 15 Oct 2020. [11 / 13 / 0]
- Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI. Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg. 15 Oct 2020. [252 / 426 / 0]
- Human-interpretable model explainability on high-dimensional data. Damien de Mijolla, Christopher Frye, M. Kunesch, J. Mansir, Ilya Feige (FAtt). 14 Oct 2020. [17 / 8 / 0]
- Learning Propagation Rules for Attribution Map Generation. Yiding Yang, Jiayan Qiu, Xiuming Zhang, Dacheng Tao, Xinchao Wang (FAtt). 14 Oct 2020. [38 / 17 / 0]
- Interpreting Attention Models with Human Visual Attention in Machine Reading Comprehension. Ekta Sood, Simon Tannert, Diego Frassinelli, Andreas Bulling, Ngoc Thang Vu (HAI). 13 Oct 2020. [24 / 54 / 0]
- A survey of algorithmic recourse: definitions, formulations, solutions, and prospects. Amir-Hossein Karimi, Gilles Barthe, Bernhard Schölkopf, Isabel Valera (FaML). 08 Oct 2020. [14 / 172 / 0]
- Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers. Hanjie Chen, Yangfeng Ji (AAML, VLM). 01 Oct 2020. [13 / 63 / 0]
- Interpretable Machine Learning for COVID-19: An Empirical Study on Severity Prediction Task. Han-Ching Wu, Wenjie Ruan, Jiangtao Wang, Dingchang Zheng, Bei Liu, ..., Xiangfei Chai, Jian Chen, Kunwei Li, Shaolin Li, A. Helal. 30 Sep 2020. [32 / 25 / 0]
- A Comprehensive Survey of Machine Learning Applied to Radar Signal Processing. Ping Lang, Xiongjun Fu, M. Martorella, Jian Dong, Rui Qin, Xianpeng Meng, M. Xie. 29 Sep 2020. [26 / 39 / 0]
- What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors. Yi-Shan Lin, Wen-Chuan Lee, Z. Berkay Celik (XAI). 22 Sep 2020. [29 / 93 / 0]
- Transparency and granularity in the SP Theory of Intelligence and its realisation in the SP Computer Model. J. Wolff. 07 Sep 2020. [11 / 5 / 0]
- How Good is your Explanation? Algorithmic Stability Measures to Assess the Quality of Explanations for Deep Neural Networks. Thomas Fel, David Vigouroux, Rémi Cadène, Thomas Serre (XAI, FAtt). 07 Sep 2020. [26 / 31 / 0]
- How Useful Are the Machine-Generated Interpretations to General Users? A Human Evaluation on Guessing the Incorrectly Predicted Labels. Hua Shen, Ting-Hao 'Kenneth' Huang (FAtt, HAI). 26 Aug 2020. [9 / 56 / 0]
- Counterfactual Explanations for Machine Learning on Multivariate Time Series Data. E. Ates, Burak Aksar, V. Leung, A. Coskun (AI4TS). 25 Aug 2020. [51 / 65 / 0]
- DNN2LR: Interpretation-inspired Feature Crossing for Real-world Tabular Data. Zhaocheng Liu, Qiang Liu, Haoli Zhang, Yuntian Chen. 22 Aug 2020. [11 / 12 / 0]
- iCaps: An Interpretable Classifier via Disentangled Capsule Networks. Dahuin Jung, Jonghyun Lee, Jihun Yi, Sungroh Yoon. 20 Aug 2020. [28 / 12 / 0]
- Trust and Medical AI: The challenges we face and the expertise needed to overcome them. Thomas P. Quinn, M. Senadeera, Stephan Jacobs, S. Coghlan, Vuong Le. 18 Aug 2020. [16 / 122 / 0]
- Tackling COVID-19 through Responsible AI Innovation: Five Steps in the Right Direction. David Leslie. 15 Aug 2020. [27 / 67 / 0]
- Deep Active Learning by Model Interpretability. Qiang Liu, Zhaocheng Liu, Xiaofang Zhu, Yeliang Xiu. 23 Jul 2020. [8 / 4 / 0]
- Interpretable Anomaly Detection with DIFFI: Depth-based Isolation Forest Feature Importance. Mattia Carletti, M. Terzi, Gian Antonio Susto. 21 Jul 2020. [36 / 42 / 0]
- timeXplain -- A Framework for Explaining the Predictions of Time Series Classifiers. Felix Mujkanovic, Vanja Doskoc, Martin Schirneck, Patrick Schäfer, Tobias Friedrich (FAtt, AI4TS). 15 Jul 2020. [19 / 23 / 0]
- Learning Invariances for Interpretability using Supervised VAE. An-phi Nguyen, María Rodríguez Martínez (DRL). 15 Jul 2020. [14 / 2 / 0]
- On quantitative aspects of model interpretability. An-phi Nguyen, María Rodríguez Martínez. 15 Jul 2020. [16 / 114 / 0]
- Concept Learners for Few-Shot Learning. Kaidi Cao, Maria Brbic, J. Leskovec (VLM, OffRL). 14 Jul 2020. [27 / 4 / 0]
- Concept Bottleneck Models. Pang Wei Koh, Thao Nguyen, Y. S. Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang. 09 Jul 2020. [40 / 776 / 0]
- Gaussian Process Regression with Local Explanation. Yuya Yoshikawa, Tomoharu Iwata (FAtt). 03 Jul 2020. [8 / 18 / 0]
- Drug discovery with explainable artificial intelligence. José Jiménez-Luna, F. Grisoni, G. Schneider. 01 Jul 2020. [30 / 625 / 0]
- Interpretable and Trustworthy Deepfake Detection via Dynamic Prototypes. Loc Trinh, Michael Tsang, Sirisha Rambhatla, Yan Liu. 28 Jun 2020. [6 / 6 / 0]
- Not all Failure Modes are Created Equal: Training Deep Neural Networks for Explicable (Mis)Classification. Alberto Olmo, Sailik Sengupta, S. Kambhampati. 26 Jun 2020. [25 / 6 / 0]
- Generative causal explanations of black-box classifiers. Matthew R. O'Shaughnessy, Gregory H. Canal, Marissa Connor, Mark A. Davenport, Christopher Rozell (CML). 24 Jun 2020. [30 / 73 / 0]
- Interpretable Deep Models for Cardiac Resynchronisation Therapy Response Prediction. Esther Puyol-Antón, Cheng Chen, J. Clough, B. Ruijsink, B. Sidhu, ..., M. Elliott, Vishal S. Mehta, Daniel Rueckert, C. Rinaldi, A. King. 24 Jun 2020. [19 / 32 / 0]
- Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey. Arun Das, P. Rad (XAI). 16 Jun 2020. [24 / 593 / 0]
- A Semiparametric Approach to Interpretable Machine Learning. Numair Sani, Jaron J. R. Lee, Razieh Nabi, I. Shpitser. 08 Jun 2020. [15 / 6 / 0]
- Higher-Order Explanations of Graph Neural Networks via Relevant Walks. Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima, Kristof T. Schütt, Klaus-Robert Müller, G. Montavon. 05 Jun 2020. [32 / 215 / 0]
- DeepCoDA: personalized interpretability for compositional health data. Thomas P. Quinn, Dang Nguyen, Santu Rana, Sunil R. Gupta, Svetha Venkatesh. 02 Jun 2020. [9 / 12 / 0]
- Explainable Artificial Intelligence: a Systematic Review. Giulia Vilone, Luca Longo (XAI). 29 May 2020. [22 / 266 / 0]
- Rationalizing Text Matching: Learning Sparse Alignments via Optimal Transport. Kyle Swanson, L. Yu, Tao Lei (OT). 27 May 2020. [29 / 37 / 0]
- The best way to select features? Xin Man, Ernest P. Chan. 26 May 2020. [16 / 60 / 0]
- Interpretable and Accurate Fine-grained Recognition via Region Grouping. Zixuan Huang, Yin Li. 21 May 2020. [12 / 138 / 0]
- Towards Interpretable Deep Learning Models for Knowledge Tracing. Yu Lu, De-Wu Wang, Qinggang Meng, Penghe Chen. 13 May 2020. [17 / 36 / 0]
- A robust algorithm for explaining unreliable machine learning survival models using the Kolmogorov-Smirnov bounds. M. Kovalev, Lev V. Utkin (AAML). 05 May 2020. [21 / 31 / 0]
- Evaluating and Aggregating Feature-based Model Explanations. Umang Bhatt, Adrian Weller, J. M. F. Moura (XAI). 01 May 2020. [33 / 218 / 0]
- Explainable Deep Learning: A Field Guide for the Uninitiated. Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran (AAML, XAI). 30 Apr 2020. [38 / 370 / 0]
- TSInsight: A local-global attribution framework for interpretability in time-series data. Shoaib Ahmed Siddiqui, Dominique Mercier, Andreas Dengel, Sheraz Ahmed (FAtt, AI4TS). 06 Apr 2020. [11 / 12 / 0]
- Generating Hierarchical Explanations on Text Classification via Feature Interaction Detection. Hanjie Chen, Guangtao Zheng, Yangfeng Ji (FAtt). 04 Apr 2020. [36 / 91 / 0]
- Born-Again Tree Ensembles. Thibaut Vidal, Toni Pacheco, Maximilian Schiffer. 24 Mar 2020. [62 / 53 / 0]
- Invariant Rationalization. Shiyu Chang, Yang Zhang, Mo Yu, Tommi Jaakkola. 22 Mar 2020. [179 / 201 / 0]
- Neural Generators of Sparse Local Linear Models for Achieving both Accuracy and Interpretability. Yuya Yoshikawa, Tomoharu Iwata. 13 Mar 2020. [16 / 7 / 0]