ResearchTrend.AI

A Unified Approach to Interpreting Model Predictions
arXiv: 1705.07874 (v2, latest) · ArXiv (abs) · PDF · HTML

22 May 2017
Scott M. Lundberg
Su-In Lee
    FAtt
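The listed paper unifies additive feature attribution methods around Shapley values. As a minimal illustrative sketch (not taken from this page, and not the paper's sampling-based estimators): the classical Shapley formula computed by brute force over feature coalitions. The names `shapley_values`, `value_fn`, and the toy additive game `contrib` are hypothetical, chosen for illustration.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values for a cooperative game value_fn(S) over n_features players."""
    players = range(n_features)
    phi = [0.0] * n_features
    for i in players:
        others = [j for j in players if j != i]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                s = len(subset)
                # Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(s) * factorial(n_features - s - 1) / factorial(n_features)
                # Weighted marginal contribution of feature i to coalition S
                phi[i] += weight * (value_fn(set(subset) | {i}) - value_fn(set(subset)))
    return phi

# Toy additive game: v(S) is the sum of fixed per-feature contributions.
contrib = {0: 1.0, 1: 2.0, 2: 3.0}
v_game = lambda S: sum(contrib[j] for j in S)
print([round(p, 6) for p in shapley_values(v_game, 3)])  # → [1.0, 2.0, 3.0]
```

For an additive game each feature's Shapley value equals its own contribution, and by the efficiency axiom the values always sum to the grand-coalition payoff; the exact computation is exponential in the number of features, which is why the paper's approximation methods matter in practice.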

Papers citing "A Unified Approach to Interpreting Model Predictions"

50 / 3,953 papers shown
Information-theoretic Evolution of Model Agnostic Global Explanations
Sukriti Verma
Nikaash Puri
Piyush B. Gupta
Balaji Krishnamurthy
FAtt
64
0
0
14 May 2021
Quantified Sleep: Machine learning techniques for observational n-of-1 studies
G. Truda
AI4TS
19
2
0
14 May 2021
Agree to Disagree: When Deep Learning Models With Identical Architectures Produce Distinct Explanations
Matthew Watson
Bashar Awwad Shiekh Hasan
Noura Al Moubayed
OOD
56
23
0
14 May 2021
SAT-Based Rigorous Explanations for Decision Lists
Alexey Ignatiev
Sasha Rubin
XAI
65
46
0
14 May 2021
Discovering the Rationale of Decisions: Experiments on Aligning Learning and Reasoning
Cor Steging
S. Renooij
Bart Verheij
40
21
0
14 May 2021
Biometrics: Trust, but Verify
Anil K. Jain
Debayan Deb
Joshua J. Engelsma
FaML
91
84
0
14 May 2021
Bias, Fairness, and Accountability with AI and ML Algorithms
Neng-Zhi Zhou
Zach Zhang
V. Nair
Harsh Singhal
Jie Chen
Agus Sudjianto
FaML
125
9
0
13 May 2021
Sanity Simulations for Saliency Methods
Joon Sik Kim
Gregory Plumb
Ameet Talwalkar
FAtt
104
18
0
13 May 2021
Causally motivated Shortcut Removal Using Auxiliary Labels
Maggie Makar
Ben Packer
D. Moldovan
Davis W. Blalock
Yoni Halpern
Alexander D'Amour
OOD CML
89
75
0
13 May 2021
Explainable Machine Learning for Fraud Detection
I. Psychoula
A. Gutmann
Pradip Mainali
Sharon H. Lee
Paul Dunphy
F. Petitcolas
FaML
139
37
0
13 May 2021
Privacy Inference Attacks and Defenses in Cloud-based Deep Neural Network: A Survey
Xiaoyu Zhang
Chao Chen
Yi Xie
Xiaofeng Chen
Jun Zhang
Yang Xiang
FedML
58
7
0
13 May 2021
A hybrid machine learning/deep learning COVID-19 severity predictive model from CT images and clinical data
M. Chieregato
Fabio Frangiamore
M. Morassi
C. Baresi
S. Nici
C. Bassetti
C. Bnà
M. Galelli
111
75
0
13 May 2021
What's wrong with this video? Comparing Explainers for Deepfake Detection
Samuele Pino
Mark J. Carman
Paolo Bestagini
AAML
44
8
0
12 May 2021
SimNet: Accurate and High-Performance Computer Architecture Simulation using Deep Learning
Lingda Li
Santosh Pandey
T. Flynn
Hang Liu
Noel Wheeler
A. Hoisie
46
8
0
12 May 2021
Is Gender "In-the-Wild" Inference Really a Solved Problem?
Tiago Roxo
Hugo Manuel Proença
CVBM
60
4
0
12 May 2021
Counterfactual Explanations for Neural Recommenders
Khanh Tran
Azin Ghazimatin
Rishiraj Saha Roy
AAML CML
112
66
0
11 May 2021
Rationalization through Concepts
Diego Antognini
Boi Faltings
FAtt
124
22
0
11 May 2021
DEEMD: Drug Efficacy Estimation against SARS-CoV-2 based on cell Morphology with Deep multiple instance learning
M. Saberian
Kathleen P. Moriarty
A. Olmstead
Christian Hallgrimson
François Jean
I. Nabi
Maxwell W. Libbrecht
Ghassan Hamarneh
59
12
0
10 May 2021
The $s$-value: evaluating stability with respect to distributional shifts
Suyash Gupta
Dominik Rothenhäusler
108
16
0
07 May 2021
A Framework of Explanation Generation toward Reliable Autonomous Robots
Tatsuya Sakai
Kazuki Miyazawa
Takato Horii
Takayuki Nagai
85
8
0
06 May 2021
Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification
G. Chrysostomou
Nikolaos Aletras
87
38
0
06 May 2021
Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain
Samanta Knapic
A. Malhi
Rohit Saluja
Kary Främling
29
102
0
05 May 2021
When Fair Ranking Meets Uncertain Inference
Avijit Ghosh
Ritam Dutt
Christo Wilson
117
46
0
05 May 2021
Attack-agnostic Adversarial Detection on Medical Data Using Explainable Machine Learning
Matthew Watson
Noura Al Moubayed
AAML MedIm
50
22
0
05 May 2021
Explaining a Series of Models by Propagating Shapley Values
Hugh Chen
Scott M. Lundberg
Su-In Lee
TDI FAtt
110
139
0
30 Apr 2021
Explanation-Based Human Debugging of NLP Models: A Survey
Piyawat Lertvittayakumjorn
Francesca Toni
LRM
150
80
0
30 Apr 2021
Finding High-Value Training Data Subset through Differentiable Convex Programming
Soumik Das
Arshdeep Singh
Saptarshi Chatterjee
S. Bhattacharya
Sourangshu Bhattacharya
TDI
43
7
0
28 Apr 2021
Do Feature Attribution Methods Correctly Attribute Features?
Yilun Zhou
Serena Booth
Marco Tulio Ribeiro
J. Shah
FAtt XAI
119
136
0
27 Apr 2021
From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence
David Alvarez-Melis
Harmanpreet Kaur
Hal Daumé
Hanna M. Wallach
Jennifer Wortman Vaughan
FAtt
105
31
0
27 Apr 2021
Metamorphic Detection of Repackaged Malware
S. Singh
Gail E. Kaiser
41
8
0
27 Apr 2021
Detection of Fake Users in SMPs Using NLP and Graph Embeddings
Manojit Chakraborty
Shubham Das
R. Mamidi
GNN
14
6
0
27 Apr 2021
Extractive and Abstractive Explanations for Fact-Checking and Evaluation of News
Ashkan Kazemi
Zehua Li
Verónica Pérez-Rosas
Rada Mihalcea
78
14
0
27 Apr 2021
TrustyAI Explainability Toolkit
Rob Geada
Tommaso Teofili
Rui Vieira
Rebecca Whitworth
Daniele Zonca
70
2
0
26 Apr 2021
Bridging observation, theory and numerical simulation of the ocean using Machine Learning
Maike Sonnewald
Redouane Lguensat
Daniel C. Jones
P. Dueben
J. Brajard
Venkatramani Balaji
AI4Cl AI4CE
102
101
0
26 Apr 2021
Weakly Supervised Multi-task Learning for Concept-based Explainability
Catarina Belém
Vladimir Balayan
Pedro Saleiro
P. Bizarro
134
10
0
26 Apr 2021
Towards Rigorous Interpretations: a Formalisation of Feature Attribution
Darius Afchar
Romain Hennequin
Vincent Guigue
FAtt
105
20
0
26 Apr 2021
Explainable AI For COVID-19 CT Classifiers: An Initial Comparison Study
Qinghao Ye
Jun Xia
Guang Yang
98
60
0
25 Apr 2021
Sampling Permutations for Shapley Value Estimation
Rory Mitchell
Joshua N. Cooper
E. Frank
G. Holmes
106
122
0
25 Apr 2021
Explainable Artificial Intelligence Reveals Novel Insight into Tumor Microenvironment Conditions Linked with Better Prognosis in Patients with Breast Cancer
Debaditya Chakraborty
C. Ivan
P. Amero
Maliha Khan
Cristian Rodríguez-Aguayo
H. Basagaoglu
G. Lopez-Berestein
31
33
0
24 Apr 2021
EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the MonuMAI cultural heritage use case
Natalia Díaz Rodríguez
Alberto Lamas
Jules Sanchez
Gianni Franchi
Ivan Donadello
Siham Tabik
David Filliat
P. Cruz
Rosana Montes
Francisco Herrera
140
78
0
24 Apr 2021
Grouped Feature Importance and Combined Features Effect Plot
Quay Au
J. Herbinger
Clemens Stachl
B. Bischl
Giuseppe Casalicchio
FAtt
100
47
0
23 Apr 2021
Interpretation of multi-label classification models using shapley values
Shikun Chen
FAtt TDI
88
10
0
21 Apr 2021
Rule Generation for Classification: Scalability, Interpretability, and Fairness
Tabea E. Rober
Adia C. Lumadjeng
M. Akyuz
Ş. İlker Birbil
126
2
0
21 Apr 2021
Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis
Samuele Poppi
Marcella Cornia
Lorenzo Baraldi
Rita Cucchiara
FAtt
199
34
0
20 Apr 2021
Interpretability in deep learning for finance: a case study for the Heston model
D. Brigo
Xiaoshan Huang
A. Pallavicini
Haitz Sáez de Ocáriz Borde
FAtt
37
9
0
19 Apr 2021
Machine learning approach to dynamic risk modeling of mortality in COVID-19: a UK Biobank study
M. Dabbah
Angus B. Reed
A. Booth
A. Yassaee
A. Despotovic
...
Emily Binning
M. Aral
D. Plans
A. Labrique
D. Mohan
61
18
0
19 Apr 2021
Improving Attribution Methods by Learning Submodular Functions
Piyushi Manupriya
Tarun Ram Menta
S. Jagarlapudi
V. Balasubramanian
TDI
94
6
0
19 Apr 2021
DA-DGCEx: Ensuring Validity of Deep Guided Counterfactual Explanations With Distribution-Aware Autoencoder Loss
Jokin Labaien
E. Zugasti
Xabier De Carlos
CML
66
4
0
19 Apr 2021
SurvNAM: The machine learning survival model explanation
Lev V. Utkin
Egor D. Satyukov
A. Konstantinov
AAML FAtt
93
30
0
18 Apr 2021
GraphSVX: Shapley Value Explanations for Graph Neural Networks
Alexandre Duval
Fragkiskos D. Malliaros
FAtt
90
92
0
18 Apr 2021