A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
22 May 2017 · arXiv: 1705.07874 · FAtt
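For context on the method this page indexes: the paper above introduces SHAP (SHapley Additive exPlanations). The sketch below is not part of the listing; it is a minimal illustration of computing SHAP values with the open-source `shap` package, which implements the paper's method, assuming a scikit-learn random forest and the diabetes toy dataset purely as placeholders.

    # Minimal sketch (not from this page): SHAP values via the `shap` package.
    # The model and dataset below are illustrative assumptions.
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)    # tree-specific SHAP algorithm
    shap_values = explainer.shap_values(X)   # shape: (n_samples, n_features)
    print(shap_values.shape)

Per the paper's local-accuracy property, each row of attributions plus the explainer's expected value reconstructs the model's prediction for that sample.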

Papers citing "A Unified Approach to Interpreting Model Predictions" (50 / 3,945 papers shown)
Data Shapley Value for Handling Noisy Labels: An application in Screening COVID-19 Pneumonia from Chest CT Scans
Nastaran Enshaei, M. Rafiee, Arash Mohammadi, F. Naderkhani
NoLa, TDI · 40 · 2 · 0 · 17 Oct 2021
TorchEsegeta: Framework for Interpretability and Explainability of Image-based Deep Learning Models
S. Chatterjee, Arnab Das, Chirag Mandal, Budhaditya Mukhopadhyay, Manish Vipinraj, Aniruddh Shukla, R. Rao, Chompunuch Sarasaen, Oliver Speck, A. Nürnberger
MedIm · 78 · 15 · 0 · 16 Oct 2021
Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, Siva Reddy
157 · 37 · 0 · 15 Oct 2021
Using Psychological Characteristics of Situations for Social Situation Comprehension in Support Agents
Ilir Kola, Catholijn M. Jonker, M. Birna van Riemsdijk
70 · 6 · 0 · 15 Oct 2021
Interpretable Neural Networks with Frank-Wolfe: Sparse Relevance Maps and Relevance Orderings
Jan Macdonald, Mathieu Besançon, Sebastian Pokutta
64 · 12 · 0 · 15 Oct 2021
The Irrationality of Neural Rationale Models
Yiming Zheng, Serena Booth, J. Shah, Yilun Zhou
120 · 17 · 0 · 14 Oct 2021
Bond Default Prediction with Text Embeddings, Undersampling and Deep Learning
Luke Jordan
21 · 0 · 0 · 13 Oct 2021
E-Commerce Dispute Resolution Prediction
David Tsurel, Michael Doron, A. Nus, Arnon Dagan, Ido Guy, Dafna Shahaf
24 · 11 · 0 · 13 Oct 2021
Logic Constraints to Feature Importances
Nicola Picchiotti, Marco Gori
52 · 0 · 0 · 13 Oct 2021
Clustering-Based Interpretation of Deep ReLU Network
Nicola Picchiotti, Marco Gori
FAtt · 22 · 0 · 0 · 13 Oct 2021
A Survey on Legal Question Answering Systems
J. Martinez-Gil
AILaw, ELM · 92 · 29 · 0 · 12 Oct 2021
A Rate-Distortion Framework for Explaining Black-box Model Decisions
Stefan Kolek, Duc Anh Nguyen, Ron Levie, Joan Bruna, Gitta Kutyniok
97 · 16 · 0 · 12 Oct 2021
You Mostly Walk Alone: Analyzing Feature Attribution in Trajectory Prediction
Osama Makansi, Julius von Kügelgen, Francesco Locatello, Peter V. Gehler, Dominik Janzing, Thomas Brox, Bernhard Schölkopf
FAtt · 88 · 29 · 0 · 11 Oct 2021
Global Explainability of BERT-Based Evaluation Metrics by Disentangling along Linguistic Factors
Marvin Kaster, Wei Zhao, Steffen Eger
111 · 26 · 0 · 08 Oct 2021
The Eval4NLP Shared Task on Explainable Quality Estimation: Overview and Results
M. Fomicheva, Piyawat Lertvittayakumjorn, Wei Zhao, Steffen Eger, Yang Gao
ELM · 97 · 41 · 0 · 08 Oct 2021
Explainability-Aware One Point Attack for Point Cloud Neural Networks
Hanxiao Tan, Helena Kotthaus
3DPC, AAML · 84 · 11 · 0 · 08 Oct 2021
Opportunities for Machine Learning to Accelerate Halide Perovskite Commercialization and Scale-Up
Rishi E. Kumar, A. Tiihonen, Shijing Sun, D. Fenning, Zhe Liu, Tonio Buonassisi
44 · 11 · 0 · 08 Oct 2021
Robotic Lever Manipulation using Hindsight Experience Replay and Shapley Additive Explanations
Sindre Benjamin Remman, A. Lekkas
55 · 14 · 0 · 07 Oct 2021
Compositional Q-learning for electrolyte repletion with imbalanced patient sub-populations
Aishwarya Mandyam, Andrew Jones, Jiayu Yao, K. Laudanski, Barbara E. Engelhardt
OffRL · 80 · 0 · 0 · 06 Oct 2021
Shapley variable importance clouds for interpretable machine learning
Yilin Ning, M. Ong, Bibhas Chakraborty, B. Goldstein, Daniel Ting, Roger Vaughan, Nan Liu
FAtt · 77 · 74 · 0 · 06 Oct 2021
Unpacking the Black Box: Regulating Algorithmic Decisions
Laura Blattner, Scott Nelson, Jann Spiess
MLAU, FaML · 80 · 19 · 0 · 05 Oct 2021
Deep Neural Networks and Tabular Data: A Survey
V. Borisov, Tobias Leemann, Kathrin Seßler, Johannes Haug, Martin Pawelczyk, Gjergji Kasneci
LMTD · 152 · 708 · 0 · 05 Oct 2021
Fine-Grained Neural Network Explanation by Identifying Input Features with Predictive Information
Yang Zhang, Ashkan Khakzar, Yawei Li, Azade Farshad, Seong Tae Kim, Nassir Navab
FAtt, XAI · 102 · 29 · 0 · 04 Oct 2021
Collective eXplainable AI: Explaining Cooperative Strategies and Agent Contribution in Multiagent Reinforcement Learning with Shapley Values
Alexandre Heuillet, Fabien Couthouis, Natalia Díaz Rodríguez
89 · 65 · 0 · 04 Oct 2021
Trustworthy AI: From Principles to Practices
Yue Liu, Peng Qi, Bo Liu, Shuai Di, Jingen Liu, Jiquan Pei, Jinfeng Yi, Bowen Zhou
213 · 384 · 0 · 04 Oct 2021
Algorithm Fairness in AI for Medicine and Healthcare
Richard J. Chen, Tiffany Y. Chen, Jana Lipkova, Judy J. Wang, Drew F. K. Williamson, Ming Y. Lu, S. Sahai, Faisal Mahmood
FaML · 158 · 47 · 0 · 01 Oct 2021
LEMON: Explainable Entity Matching
Nils Barlaug
FAtt, AAML · 62 · 9 · 0 · 01 Oct 2021
On the Trustworthiness of Tree Ensemble Explainability Methods
Angeline Yasodhara, Azin Asgarian, Diego Huang, Parinaz Sobhani
FAtt · 121 · 5 · 0 · 30 Sep 2021
XPROAX-Local explanations for text classification with progressive neighborhood approximation
Yi Cai, Arthur Zimek, Eirini Ntoutsi
81 · 5 · 0 · 30 Sep 2021
Out-of-Distribution Detection for Medical Applications: Guidelines for Practical Evaluation
Karina Zadorozhny, P. Thoral, Paul Elbers, Giovanni Cina
OODD, OOD · 86 · 15 · 0 · 30 Sep 2021
Posttraumatic Stress Disorder Hyperarousal Event Detection Using Smartwatch Physiological and Activity Data
Mahnoosh Sadeghi, Anthony D. McDonald, Farzan Sasangohar
39 · 24 · 0 · 29 Sep 2021
An Explainable-AI approach for Diagnosis of COVID-19 using MALDI-ToF Mass Spectrometry
V. Seethi, Z. LaCasse, P. Chivte, Joshua Bland, Shrihari S. Kadkol, E. Gaillard, Pratool Bharti, Hamed Alhoori
31 · 11 · 0 · 28 Sep 2021
Intelligent Decision Assistance Versus Automated Decision-Making: Enhancing Knowledge Work Through Explainable Artificial Intelligence
Max Schemmer, Niklas Kühl, G. Satzger
58 · 14 · 0 · 28 Sep 2021
Micromodels for Efficient, Explainable, and Reusable Systems: A Case Study on Mental Health
Andrew Lee, Jonathan K. Kummerfeld, Lawrence C. An, Rada Mihalcea
98 · 24 · 0 · 28 Sep 2021
Multi-Semantic Image Recognition Model and Evaluating Index for explaining the deep learning models
Qianmengke Zhao, Ye Wang, Qun Liu
AAML, VLM, XAI · 56 · 0 · 0 · 28 Sep 2021
Discriminative Attribution from Counterfactuals
N. Eckstein, A. S. Bates, G. Jefferis, Jan Funke
FAtt, CML · 51 · 1 · 0 · 28 Sep 2021
Exploring The Role of Local and Global Explanations in Recommender Systems
Marissa Radensky, Doug Downey, Kyle Lo, Z. Popović, Daniel S. Weld
LRM · 90 · 22 · 0 · 27 Sep 2021
ML4ML: Automated Invariance Testing for Machine Learning Models
Zukang Liao, Pengfei Zhang, Min Chen
VLM · 60 · 3 · 0 · 27 Sep 2021
Heterogeneous Treatment Effect Estimation using machine learning for Healthcare application: tutorial and benchmark
Yaobin Ling, Pulakesh Upadhyaya, Luyao Chen, Xiaoqian Jiang, Yejin Kim
CML · 167 · 21 · 0 · 27 Sep 2021
Combining Discrete Choice Models and Neural Networks through Embeddings: Formulation, Interpretability and Performance
Ioanna Arkoudi, C. L. Azevedo, Francisco Câmara Pereira
78 · 18 · 0 · 24 Sep 2021
Understanding Spending Behavior: Recurrent Neural Network Explanation and Interpretation
Charl Maree, C. Omlin
AI4TS · 52 · 5 · 0 · 24 Sep 2021
AES Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses
Yaman Kumar Singla, Swapnil Parekh, Somesh Singh, Junjie Li, R. Shah, Changyou Chen
AAML · 90 · 14 · 0 · 24 Sep 2021
DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications
Dongqi Han, Zhiliang Wang, Wenqi Chen, Ying Zhong, Su Wang, Han Zhang, Jiahai Yang, Xingang Shi, Xia Yin
AAML · 67 · 81 · 0 · 23 Sep 2021
Interpretable Directed Diversity: Leveraging Model Explanations for Iterative Crowd Ideation
Yunlong Wang, Priyadarshini Venkatesh, Brian Y. Lim
137 · 21 · 0 · 21 Sep 2021
Fast TreeSHAP: Accelerating SHAP Value Computation for Trees
Jilei Yang
FAtt · 105 · 37 · 0 · 20 Sep 2021
Counterfactual Instances Explain Little
Adam White, Artur Garcez
CML · 48 · 5 · 0 · 20 Sep 2021
FUTURE-AI: Guiding Principles and Consensus Recommendations for Trustworthy Artificial Intelligence in Medical Imaging
Karim Lekadira, Richard Osuala, C. Gallin, Noussair Lazrak, Kaisar Kushibar, ..., Nickolas Papanikolaou, Zohaib Salahuddin, Henry C. Woodruff, Philippe Lambin, L. Martí-Bonmatí
AI4TS · 163 · 62 · 0 · 20 Sep 2021
Some Critical and Ethical Perspectives on the Empirical Turn of AI Interpretability
Jean-Marie John-Mathews
79 · 34 · 0 · 20 Sep 2021
Machine Learning-Based COVID-19 Patients Triage Algorithm using Patient-Generated Health Data from Nationwide Multicenter Database
Min Sue Park, Hyeontae Jo, Haeun Lee, S. Jung, H. Hwang
72 · 14 · 0 · 18 Sep 2021
TS-MULE: Local Interpretable Model-Agnostic Explanations for Time Series Forecast Models
U. Schlegel, D. Lam, Daniel A. Keim, Daniel Seebacher
FAtt, AI4TS · 110 · 32 · 0 · 17 Sep 2021