arXiv:1705.07874 (v2, latest)
A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee. FAtt. 22 May 2017.
Papers citing "A Unified Approach to Interpreting Model Predictions" (showing 50 of 3,953):
On the Sensitivity and Stability of Model Interpretations in NLP
Fan Yin, Zhouxing Shi, Cho-Jui Hsieh, Kai-Wei Chang. FAtt. 18 Apr 2021.
On the Complexity of SHAP-Score-Based Explanations: Tractability via Knowledge Compilation and Non-Approximability Results
Marcelo Arenas, Pablo Barceló, Leopoldo Bertossi, Mikaël Monet. FAtt. 16 Apr 2021.
NICE: An Algorithm for Nearest Instance Counterfactual Explanations
Dieter Brughmans, Pieter Leyman, David Martens. 15 Apr 2021.
Text Guide: Improving the quality of long text classification by a text selection method based on feature importance
K. Fiok, W. Karwowski, Edgar Gutierrez-Franco, Mohammad Reza Davahli, Maciej Wilamowski, T. Ahram, Awad M. Aljuaid, Jozef Zurada. VLM. 15 Apr 2021.
Evaluating Standard Feature Sets Towards Increased Generalisability and Explainability of ML-based Network Intrusion Detection
Mohanad Sarhan, S. Layeghy, Marius Portmann. 15 Apr 2021.
What Makes a Scientific Paper be Accepted for Publication?
Panagiotis Fytas, Georgios Rizos, Lucia Specia. 14 Apr 2021.
To Trust or Not to Trust a Regressor: Estimating and Explaining Trustworthiness of Regression Predictions
K. D. Bie, Ana Lucic, H. Haned. FAtt. 14 Apr 2021.
Enabling Machine Learning Algorithms for Credit Scoring -- Explainable Artificial Intelligence (XAI) methods for clear understanding complex predictive models
P. Biecek, M. Chlebus, Janusz Gajda, Alicja Gosiewska, A. Kozak, Dominik Ogonowski, Jakub Sztachelski, P. Wojewnik. 14 Apr 2021.
Generative Causal Explanations for Graph Neural Networks
Wanyu Lin, Hao Lan, Baochun Li. CML. 14 Apr 2021.
Mutual Information Preserving Back-propagation: Learn to Invert for Faithful Attribution
Huiqi Deng, Na Zou, Weifu Chen, Guo-Can Feng, Mengnan Du, Helen Zhou. FAtt. 14 Apr 2021.
Conclusive Local Interpretation Rules for Random Forests
Ioannis Mollas, Nick Bassiliades, Grigorios Tsoumakas. FaML, FAtt. 13 Apr 2021.
Enhancing User's Income Estimation with Super-App Alternative Data
Gabriel Suarez, Juan Raful, María A. Luque, C. Valencia, Alejandro Correa-Bahnsen. 12 Apr 2021.
Model LineUpper: Supporting Interactive Model Comparison at Multiple Levels for AutoML
S. Narkar, Yunfeng Zhang, Q. V. Liao, Dakuo Wang, Justin D. Weisz. 09 Apr 2021.
An Empirical Comparison of Instance Attribution Methods for NLP
Pouya Pezeshkpour, Sarthak Jain, Byron C. Wallace, Sameer Singh. TDI. 09 Apr 2021.
Question-Driven Design Process for Explainable AI User Experiences
Q. V. Liao, Milena Pribić, Jaesik Han, Sarah Miller, Daby M. Sow. 08 Apr 2021.
Deep Interpretable Models of Theory of Mind
Ini Oguntola, Dana Hughes, Katia Sycara. HAI. 07 Apr 2021.
Towards a Rigorous Evaluation of Explainability for Multivariate Time Series
Rohit Saluja, A. Malhi, Samanta Knapic, Kary Främling, C. Cavdar. XAI, AI4TS. 06 Apr 2021.
Shapley Explanation Networks
Rui Wang, Xiaoqian Wang, David I. Inouye. TDI, FAtt. 06 Apr 2021.
Late fusion of machine learning models using passively captured interpersonal social interactions and motion from smartphones predicts decompensation in heart failure
Ayse S. Cakmak, Samuel Densen, Gabriel Najarro, Pratik Rout, Christopher Rozell, O. Inan, Amit J. Shah, Gari D. Clifford. 04 Apr 2021.
STARdom: an architecture for trusted and secure human-centered manufacturing systems
Jože M. Rožanec, Patrik Zajec, K. Kenda, I. Novalija, B. Fortuna, ..., Diego Reforgiato Recupero, D. Kyriazis, G. Sofianidis, Spyros Theodoropoulos, John Soldatos. 02 Apr 2021.
Explainable Artificial Intelligence (XAI) on TimeSeries Data: A Survey
Thomas Rojat, Raphael Puget, David Filliat, Javier Del Ser, R. Gelin, Natalia Díaz Rodríguez. XAI, AI4TS. 02 Apr 2021.
Coalitional strategies for efficient individual prediction explanation
Gabriel Ferrettini, Elodie Escriva, Julien Aligon, Jean-Baptiste Excoffier, C. Soulé-Dupuy. 01 Apr 2021.
NetAdaptV2: Efficient Neural Architecture Search with Fast Super-Network Training and Architecture Optimization
Tien-Ju Yang, Yi-Lun Liao, Vivienne Sze. 31 Mar 2021.
Trusted Artificial Intelligence: Towards Certification of Machine Learning Applications
P. M. Winter, Sebastian K. Eder, J. Weissenbock, Christoph Schwald, Thomas Doms, Tom Vogt, Sepp Hochreiter, Bernhard Nessler. 31 Mar 2021.
MISA: Online Defense of Trojaned Models using Misattributions
Panagiota Kiourti, Wenchao Li, Anirban Roy, Karan Sikka, Susmit Jha. 29 Mar 2021.
Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers
Hila Chefer, Shir Gur, Lior Wolf. ViT. 29 Mar 2021.
Efficient Explanations from Empirical Explainers
Robert Schwarzenberg, Nils Feldhus, Sebastian Möller. FAtt. 29 Mar 2021.
Adaptive Autonomy in Human-on-the-Loop Vision-Based Robotics Systems
Sophia J. Abraham, Zachariah Carmichael, Sreya Banerjee, Rosaura G. VidalMata, Ankit Agrawal, M. N. A. Islam, Walter J. Scheirer, J. Cleland-Huang. 28 Mar 2021.
A Multistakeholder Approach Towards Evaluating AI Transparency Mechanisms
Ana Lucic, Madhulika Srikumar, Umang Bhatt, Alice Xiang, Ankur Taly, Q. V. Liao, Maarten de Rijke. 27 Mar 2021.
Using Eye-tracking Data to Predict Situation Awareness in Real Time during Takeover Transitions in Conditionally Automated Driving
Feng Zhou, X. J. Yang, J. D. Winter. 27 Mar 2021.
Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice
David S. Watson, Limor Gultchin, Ankur Taly, Luciano Floridi. 27 Mar 2021.
FeatureEnVi: Visual Analytics for Feature Engineering Using Stepwise Selection and Semi-Automatic Extraction Approaches
Angelos Chatzimparmpas, Rafael M. Martins, Kostiantyn Kucher, Andreas Kerren. 26 Mar 2021.
Quantitative Prediction on the Enantioselectivity of Multiple Chiral Iodoarene Scaffolds Based on Whole Geometry
Prema Dhorma Lama, Surendra Kumar, Kang Kim, Sang-Doo Ahn, Mi-hyun Kim. 25 Mar 2021.
ECINN: Efficient Counterfactuals from Invertible Neural Networks
Frederik Hvilshoj, Alexandros Iosifidis, Ira Assent. BDL. 25 Mar 2021.
The Shapley Value of coalition of variables provides better explanations
Salim I. Amoukou, Nicolas Brunel, Tangi Salaun. FAtt, TDI. 24 Mar 2021.
Explainability: Relevance based Dynamic Deep Learning Algorithm for Fault Detection and Diagnosis in Chemical Processes
P. Agarwal, Melih Tamer, H. Budman. AAML. 22 Mar 2021.
Explaining Black-Box Algorithms Using Probabilistic Contrastive Counterfactuals
Sainyam Galhotra, Romila Pradhan, Babak Salimi. CML. 22 Mar 2021.
Interpreting Deep Learning Models with Marginal Attribution by Conditioning on Quantiles
M. Merz, Ronald Richman, A. Tsanakas, M. Wüthrich. FAtt. 22 Mar 2021.
Robust Models Are More Interpretable Because Attributions Look Normal
Zifan Wang, Matt Fredrikson, Anupam Datta. OOD, FAtt. 20 Mar 2021.
Understanding Heart-Failure Patients EHR Clinical Features via SHAP Interpretation of Tree-Based Machine Learning Model Predictions
Shuyu Lu, Ruoyu Chen, Wei Wei, Xinghua Lu. FAtt. 20 Mar 2021.
Local Interpretations for Explainable Natural Language Processing: A Survey
Siwen Luo, Hamish Ivison, S. Han, Josiah Poon. MILM. 20 Mar 2021.
Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond
Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Ji Liu, Jiang Bian, Dejing Dou. AAML, FaML, XAI, HAI. 19 Mar 2021.
Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations
Pau Rodríguez López, Massimo Caccia, Alexandre Lacoste, L. Zamparo, I. Laradji, Laurent Charlin, David Vazquez. AAML. 18 Mar 2021.
Glioblastoma Multiforme Prognosis: MRI Missing Modality Generation, Segmentation and Radiogenomic Survival Prediction
Mobarakol Islam, Navodini Wijethilake, Hongliang Ren. 17 Mar 2021.
Neural Networks and Denotation
E. Allen. 15 Mar 2021.
Explaining Credit Risk Scoring through Feature Contribution Alignment with Expert Risk Analysts
Ayoub El Qadi, Natalia Díaz Rodríguez, M. Trocan, Thomas Frossard. 15 Mar 2021.
A new interpretable unsupervised anomaly detection method based on residual explanation
David F. N. Oliveira, L. Vismari, A. M. Nascimento, J. R. de Almeida, P. Cugnasca, J. Camargo, L. Almeida, Rafael Gripp, Marcelo M. Neves. AAML. 14 Mar 2021.
Explaining Network Intrusion Detection System Using Explainable AI Framework
Shraddha Mane, Dattaraj J. Rao. AAML. 12 Mar 2021.
Interpretable Data-driven Methods for Subgrid-scale Closure in LES for Transcritical LOX/GCH4 Combustion
Wai Tong Chung, A. Mishra, M. Ihme. AI4CE. 11 Mar 2021.
Interpretable Machine Learning: Moving From Mythos to Diagnostics
Valerie Chen, Jeffrey Li, Joon Sik Kim, Gregory Plumb, Ameet Talwalkar. 10 Mar 2021.