ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

A Unified Approach to Interpreting Model Predictions
arXiv:1705.07874 · 22 May 2017
Scott M. Lundberg, Su-In Lee
FAtt

Papers citing "A Unified Approach to Interpreting Model Predictions"

50 / 3,953 papers shown
 1. The Consequences of the Framing of Machine Learning Risk Prediction Models: Evaluation of Sepsis in General Wards
    S. Lauritsen, B. Thiesson, Marianne Johansson Jørgensen, A. Riis, U. Espelund, J. Weile, Jeppe Lange · 26 Jan 2021 · 62 / 3 / 0
 2. Better sampling in explanation methods can prevent dieselgate-like deception
    Domen Vreš, Marko Robnik-Šikonja · AAML · 26 Jan 2021 · 35 / 10 / 0
 3. Model-agnostic interpretation by visualization of feature perturbations
    Wilson E. Marcílio-Jr, D. M. Eler, Fabricio A. Breve · AAML · 26 Jan 2021 · 36 / 1 / 0
 4. How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations
    Sérgio Jesus, Catarina Belém, Vladimir Balayan, João Bento, Pedro Saleiro, P. Bizarro, João Gama · 21 Jan 2021 · 208 / 121 / 0
 5. Orthogonal Least Squares Based Fast Feature Selection for Linear Classification
    Sikai Zhang, Z. Lang · 21 Jan 2021 · 71 / 17 / 0
 6. GLocalX -- From Local to Global Explanations of Black Box AI Models
    Mattia Setzu, Riccardo Guidotti, A. Monreale, Franco Turini, D. Pedreschi, F. Giannotti · 19 Jan 2021 · 107 / 121 / 0
 7. COVID-Net CT-2: Enhanced Deep Neural Networks for Detection of COVID-19 from Chest CT Images Through Bigger, More Diverse Learning
    Hayden Gunraj, A. Sabri, D. Koff, A. Wong · 19 Jan 2021 · 118 / 99 / 0
 8. Fidelity and Privacy of Synthetic Medical Data
    O. Mendelevitch, M. Lesh · 18 Jan 2021 · 93 / 30 / 0
 9. Dissonance Between Human and Machine Understanding
    Zijian Zhang, Jaspreet Singh, U. Gadiraju, Avishek Anand · 18 Jan 2021 · 127 / 74 / 0
10. Interactive slice visualization for exploring machine learning models
    C. Hurley, Mark O'Connell, Katarina Domijan · FAtt · 18 Jan 2021 · 43 / 9 / 0
11. Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation
    Fan Yang, Ninghao Liu, Mengnan Du, X. Hu · OOD · 18 Jan 2021 · 61 / 17 / 0
12. A Deep Learning Based Ternary Task Classification System Using Gramian Angular Summation Field in fNIRS Neuroimaging Data
    Sajila D. Wickramaratne, Md. Shaad Mahmud · 14 Jan 2021 · 56 / 18 / 0
13. U-Noise: Learnable Noise Masks for Interpretable Image Segmentation
    Teddy Koker, Fatemehsadat Mireshghallah, Tom Titcombe, Georgios Kaissis · 14 Jan 2021 · 52 / 22 / 0
14. Kernel-based ANOVA decomposition and Shapley effects -- Application to global sensitivity analysis
    Sébastien Da Veiga · FAtt · 14 Jan 2021 · 84 / 26 / 0
15. Explainability of deep vision-based autonomous driving systems: Review and challenges
    Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord · XAI · 13 Jan 2021 · 186 / 178 / 0
16. Understanding the Effect of Out-of-distribution Examples and Interactive Explanations on Human-AI Decision Making
    Han Liu, Vivian Lai, Chenhao Tan · 13 Jan 2021 · 173 / 121 / 0
17. Towards Interpretable Ensemble Learning for Image-based Malware Detection
    Yuzhou Lin, Xiaolin Chang · AAML · 13 Jan 2021 · 70 / 8 / 0
18. Automated Detection of Patellofemoral Osteoarthritis from Knee Lateral View Radiographs Using Deep Learning: Data from the Multicenter Osteoarthritis Study (MOST)
    N. Bayramoglu, M. Nieminen, S. Saarakkala · 12 Jan 2021 · 25 / 21 / 0
19. Explaining the Black-box Smoothly- A Counterfactual Approach
    Junyu Chen, Yong Du, Yufan He, W. Paul Segars, Ye Li · MedIm, FAtt · 11 Jan 2021 · 152 / 105 / 0
20. Explain and Predict, and then Predict Again
    Zijian Zhang, Koustav Rudra, Avishek Anand · FAtt · 11 Jan 2021 · 98 / 51 / 0
21. The Shapley Value of Classifiers in Ensemble Games
    Benedek Rozemberczki, Rik Sarkar · FAtt, FedML, TDI · 06 Jan 2021 · 124 / 33 / 0
22. Risk markers by sex for in-hospital mortality in patients with acute coronary syndrome: a machine learning approach
    Blanca Vázquez, Gibran Fuentes-Pineda, Fabian Garcia, G. Borrayo, J. Prohias · 06 Jan 2021 · 13 / 4 / 0
23. On the price of explainability for some clustering problems
    E. Laber, Lucas Murtinho · 05 Jan 2021 · 90 / 26 / 0
24. Explainable AI and Adoption of Financial Algorithmic Advisors: an Experimental Study
    D. David, Yehezkel S. Resheff, Talia Tron · 05 Jan 2021 · 44 / 24 / 0
25. On Baselines for Local Feature Attributions
    Johannes Haug, Stefan Zurn, Peter El-Jiz, Gjergji Kasneci · FAtt · 04 Jan 2021 · 64 / 31 / 0
26. Outcome-Explorer: A Causality Guided Interactive Visual Interface for Interpretable Algorithmic Decision Making
    Md. Naimul Hoque, Klaus Mueller · CML · 03 Jan 2021 · 151 / 30 / 0
27. Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models
    Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, Daniel S. Weld · 01 Jan 2021 · 173 / 252 / 0
28. Socially Responsible AI Algorithms: Issues, Purposes, and Challenges
    Lu Cheng, Kush R. Varshney, Huan Liu · FaML · 01 Jan 2021 · 170 / 154 / 0
29. FastIF: Scalable Influence Functions for Efficient Model Interpretation and Debugging
    Han Guo, Nazneen Rajani, Peter Hase, Joey Tianyi Zhou, Caiming Xiong · TDI · 31 Dec 2020 · 135 / 116 / 0
30. Quantitative Evaluations on Saliency Methods: An Experimental Study
    Xiao-hui Li, Yuhan Shi, Haoyang Li, Wei Bai, Yuanwei Song, Caleb Chen Cao, Lei Chen · FAtt, XAI · 31 Dec 2020 · 108 / 20 / 0
31. Detecting Anomalous Invoice Line Items in the Legal Case Lifecycle
    V. Constantinou, M. Kabiri · AILaw · 28 Dec 2020 · 47 / 2 / 0
32. dalex: Responsible Machine Learning with Interactive Explainability and Fairness in Python
    Hubert Baniecki, Wojciech Kretowicz, Piotr Piątyszek, J. Wiśniewski, P. Biecek · FaML · 28 Dec 2020 · 98 / 97 / 0
33. A Survey on Neural Network Interpretability
    Yu Zhang, Peter Tiño, A. Leonardis, K. Tang · FaML, XAI · 28 Dec 2020 · 214 / 692 / 0
34. Inserting Information Bottlenecks for Attribution in Transformers
    Zhiying Jiang, Raphael Tang, Ji Xin, Jimmy J. Lin · 27 Dec 2020 · 57 / 6 / 0
35. Modeling Dispositional and Initial learned Trust in Automated Vehicles with Predictability and Explainability
    Jackie Ayoub, X. J. Yang, Feng Zhou · 25 Dec 2020 · 77 / 68 / 0
36. To what extent do human explanations of model behavior align with actual model behavior?
    Grusha Prasad, Yixin Nie, Joey Tianyi Zhou, Robin Jia, Douwe Kiela, Adina Williams · 24 Dec 2020 · 75 / 28 / 0
37. QUACKIE: A NLP Classification Task With Ground Truth Explanations
    Yves Rychener, X. Renard, Djamé Seddah, P. Frossard, Marcin Detyniecki · 24 Dec 2020 · 41 / 3 / 0
38. On the Granularity of Explanations in Model Agnostic NLP Interpretability
    Yves Rychener, X. Renard, Djamé Seddah, P. Frossard, Marcin Detyniecki · MILM, FAtt · 24 Dec 2020 · 83 / 3 / 0
39. Interpreting Deep Learning Models for Epileptic Seizure Detection on EEG signals
    Valentin Gabeff, T. Teijeiro, Marina Zapater, L. Cammoun, S. Rheims, P. Ryvlin, David Atienza Alonso · 22 Dec 2020 · 50 / 59 / 0
40. Algorithmic Recourse in the Wild: Understanding the Impact of Data and Model Shifts
    Kaivalya Rawal, Ece Kamar, Himabindu Lakkaraju · 22 Dec 2020 · 96 / 42 / 0
41. Unbox the Blackbox: Predict and Interpret YouTube Viewership Using Deep Learning
    Jiaheng Xie, Xinyu Liu · HAI · 21 Dec 2020 · 106 / 11 / 0
42. On Relating 'Why?' and 'Why Not?' Explanations
    Alexey Ignatiev, Nina Narodytska, Nicholas M. Asher, Sasha Rubin · XAI, FAtt, LRM · 21 Dec 2020 · 69 / 26 / 0
43. Biased Models Have Biased Explanations
    Aditya Jain, Manish Ravula, Joydeep Ghosh · FaML · 20 Dec 2020 · 70 / 19 / 0
44. Towards Robust Explanations for Deep Neural Networks
    Ann-Kathrin Dombrowski, Christopher J. Anders, K. Müller, Pan Kessel · FAtt · 18 Dec 2020 · 102 / 64 / 0
45. Research Reproducibility as a Survival Analysis
    Edward Raff · SyDa, AI4CE · 17 Dec 2020 · 67 / 19 / 0
46. Transformer Interpretability Beyond Attention Visualization
    Hila Chefer, Shir Gur, Lior Wolf · 17 Dec 2020 · 152 / 681 / 0
47. Series Saliency: Temporal Interpretation for Multivariate Time Series Forecasting
    Qingyi Pan, Wenbo Hu, Jun Zhu · FAtt, AI4TS · 16 Dec 2020 · 58 / 7 / 0
48. Latent-CF: A Simple Baseline for Reverse Counterfactual Explanations
    R. Balasubramanian, Samuel Sharpe, Brian Barr, J. Wittenbach, C. Bayan Bruss · BDL · 16 Dec 2020 · 65 / 18 / 0
49. AIST: An Interpretable Attention-based Deep Learning Model for Crime Prediction
    Yeasir Rayhan, T. Hashem · 16 Dec 2020 · 43 / 24 / 0
50. Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges
    Kashif Ahmad, Majdi Maabreh, M. Ghaly, Khalil Khan, Junaid Qadir, Ala I. Al-Fuqaha · 14 Dec 2020 · 121 / 157 / 0