ResearchTrend.AI

A Unified Approach to Interpreting Model Predictions
arXiv:1705.07874
Scott M. Lundberg, Su-In Lee
FAtt
22 May 2017

Papers citing "A Unified Approach to Interpreting Model Predictions"

50 / 3,942 papers shown
Scrutinizing XAI using linear ground-truth data with suppressor variables
Rick Wilming, Céline Budding, K. Müller, Stefan Haufe
FAtt
14 Nov 2021
A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines
V. Borisov, Johannes Meier, J. V. D. Heuvel, Hamed Jalali, Gjergji Kasneci
FAtt
14 Nov 2021
A Practical guide on Explainable AI Techniques applied on Biomedical use case applications
Adrien Bennetot, Ivan Donadello, Ayoub El Qadi, M. Dragoni, Thomas Frossard, ..., M. Trocan, Raja Chatila, Andreas Holzinger, Artur Garcez, Natalia Díaz Rodríguez
XAI
13 Nov 2021
LoMEF: A Framework to Produce Local Explanations for Global Model Time Series Forecasts
Dilini Sewwandi Rajapaksha, Christoph Bergmeir, Rob J. Hyndman
FAtt, AI4TS
13 Nov 2021
Learning Interpretation with Explainable Knowledge Distillation
Raed Alharbi, Minh Nhat Vu, My T. Thai
12 Nov 2021
Explainable AI for Psychological Profiling from Digital Footprints: A Case Study of Big Five Personality Predictions from Spending Data
Yanou Ramon, S. Matz, R. Farrokhnia, David Martens
12 Nov 2021
Discovering and Explaining the Representation Bottleneck of DNNs
Huiqi Deng, Qihan Ren, Hao Zhang, Quanshi Zhang
11 Nov 2021
Beyond Importance Scores: Interpreting Tabular ML by Visualizing Feature Semantics
Amirata Ghorbani, Dina Berenbaum, Maor Ivgi, Yuval Dafna, James Zou
FAtt
10 Nov 2021
Data-Driven AI Model Signal-Awareness Enhancement and Introspection
Sahil Suneja, Yufan Zhuang, Yunhui Zheng, Jim Laredo, Alessandro Morari
SyDa
10 Nov 2021
Self-Interpretable Model with Transformation Equivariant Interpretation
Yipei Wang, Xiaoqian Wang
09 Nov 2021
Consistent Sufficient Explanations and Minimal Local Rules for explaining regression and classification models
Salim I. Amoukou, Nicolas Brunel
FAtt, LRM
08 Nov 2021
Defense Against Explanation Manipulation
Ruixiang Tang, Ninghao Liu, Fan Yang, Na Zou, Helen Zhou
AAML
08 Nov 2021
AI challenges for predicting the impact of mutations on protein stability
F. Pucci, Martin Schwersensky, M. Rooman
08 Nov 2021
Data-Centric Engineering: integrating simulation, machine learning and statistics. Challenges and Opportunities
Indranil Pan, L. Mason, Omar K. Matar
AI4CE
07 Nov 2021
"How Does It Detect A Malicious App?" Explaining the Predictions of AI-based Android Malware Detector
Zhi Lu, V. Thing
AAML
06 Nov 2021
Interpreting Representation Quality of DNNs for 3D Point Cloud Processing
Wen Shen, Qihan Ren, Dongrui Liu, Quanshi Zhang
3DPC
05 Nov 2021
Visualizing the Emergence of Intermediate Visual Patterns in DNNs
Mingjie Li, Shaobo Wang, Quanshi Zhang
05 Nov 2021
Causal versus Marginal Shapley Values for Robotic Lever Manipulation Controlled using Deep Reinforcement Learning
Sindre Benjamin Remman, Inga Strümke, A. Lekkas
CML
04 Nov 2021
Convolutional Motif Kernel Networks
Jonas C. Ditz, Bernhard Reuter, N. Pfeifer
FAtt
03 Nov 2021
Exploring Explainable AI in the Financial Sector: Perspectives of Banks and Supervisory Authorities
O. Kuiper, M. V. D. Berg, Joost van den Burgt, S. Leijnen
03 Nov 2021
Decision Support Models for Predicting and Explaining Airport Passenger Connectivity from Data
Marta Guimarães, Cláudia Soares, Rodrigo V. Ventura
02 Nov 2021
Designing Inherently Interpretable Machine Learning Models
Agus Sudjianto, Aijun Zhang
FaML
02 Nov 2021
Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey
Bonan Min, Hayley L Ross, Elior Sulem, Amir Pouran Ben Veyseh, Thien Huu Nguyen, Oscar Sainz, Eneko Agirre, Ilana Heinz, Dan Roth
LM&MA, VLM, AI4CE
01 Nov 2021
Low-Cost Algorithmic Recourse for Users With Uncertain Cost Functions
Prateek Yadav, Peter Hase, Joey Tianyi Zhou
01 Nov 2021
Provably efficient, succinct, and precise explanations
Guy Blanc, Jane Lange, Li-Yang Tan
FAtt
01 Nov 2021
Comparative Explanations of Recommendations
Aobo Yang, Nan Wang, Renqin Cai, Hongbo Deng, Hongning Wang
01 Nov 2021
Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods
Zohaib Salahuddin, Henry C. Woodruff, A. Chatterjee, Philippe Lambin
01 Nov 2021
A Survey on the Robustness of Feature Importance and Counterfactual Explanations
Saumitra Mishra, Sanghamitra Dutta, Jason Long, Daniele Magazzeni
AAML
30 Oct 2021
Towards Comparative Physical Interpretation of Spatial Variability Aware Neural Networks: A Summary of Results
Jayant Gupta, Carl Molnar, Gaoxiang Luo, Joe Knight, Shashi Shekhar
29 Oct 2021
Holistic Deep Learning
Dimitris Bertsimas, Kimberly Villalobos Carballo, L. Boussioux, M. Li, Alex Paskov, I. Paskov
29 Oct 2021
Explaining Latent Representations with a Corpus of Examples
Jonathan Crabbé, Zhaozhi Qian, F. Imrie, M. Schaar
FAtt
28 Oct 2021
XDEEP-MSI: Explainable Bias-Rejecting Microsatellite Instability Deep Learning System In Colorectal Cancer
A. Bustos, A. Payá, A. Torrubia, R. Jover, X. Llor, X. Bessa, A. Castells, C. Alenda
28 Oct 2021
Perceptual Score: What Data Modalities Does Your Model Perceive?
Itai Gat, Idan Schwartz, Alex Schwing
27 Oct 2021
Counterfactual Shapley Additive Explanations
Emanuele Albini, Jason Long, Danial Dervovic, Daniele Magazzeni
27 Oct 2021
Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning
Yongchan Kwon, James Zou
TDI
26 Oct 2021
Reliable and Trustworthy Machine Learning for Health Using Dataset Shift Detection
Chunjong Park, Anas Awadalla, Tadayoshi Kohno, Shwetak N. Patel
OOD
26 Oct 2021
Understanding Interlocking Dynamics of Cooperative Rationalization
Mo Yu, Yang Zhang, Shiyu Chang, Tommi Jaakkola
26 Oct 2021
Partial Order in Chaos: Consensus on Feature Attributions in the Rashomon Set
Gabriel Laberge, Y. Pequignot, Alexandre Mathieu, Foutse Khomh, M. Marchand
FAtt
26 Oct 2021
Double Trouble: How to not explain a text classifier's decisions using counterfactuals synthesized by masked language models?
Thang M. Pham, Trung H. Bui, Long Mai, Anh Totti Nguyen
22 Oct 2021
Explainable Landscape-Aware Optimization Performance Prediction
R. Trajanov, Stefan Dimeski, Martin Popovski, Peter Korošec, T. Eftimov
22 Oct 2021
ProtoShotXAI: Using Prototypical Few-Shot Architecture for Explainable AI
Samuel Hess, G. Ditzler
AAML
22 Oct 2021
Text Counterfactuals via Latent Optimization and Shapley-Guided Search
Quintin Pope, Xiaoli Z. Fern
22 Oct 2021
Human-Centered Explainable AI (XAI): From Algorithms to User Experiences
Q. V. Liao, R. Varshney
20 Oct 2021
Multi-concept adversarial attacks
Vibha Belavadi, Yan Zhou, Murat Kantarcioglu, B. Thuraisingham
AAML
19 Oct 2021
Coalitional Bayesian Autoencoders -- Towards explainable unsupervised deep learning
Bang Xiang Yong, Alexandra Brintrup
19 Oct 2021
EEGminer: Discovering Interpretable Features of Brain Activity with Learnable Filters
Siegfried Ludwig, Stylianos Bakas, Dimitrios A. Adamos, N. Laskaris, Yannis Panagakis, Stefanos Zafeiriou
19 Oct 2021
19 Oct 2021
AequeVox: Automated Fairness Testing of Speech Recognition Systems
Sai Sathiesh Rajan, Sakshi Udeshi, Sudipta Chattopadhyay
19 Oct 2021
Efficient Analysis of COVID-19 Clinical Data using Machine Learning Models
Sarwan Ali, Yijing Zhou, M. Patterson
OOD
18 Oct 2021
RKHS-SHAP: Shapley Values for Kernel Methods
Siu Lun Chau, Robert Hu, Javier I. González, Dino Sejdinovic
FAtt
18 Oct 2021
Schrödinger's Tree -- On Syntax and Neural Language Models
Artur Kulmizev, Joakim Nivre
17 Oct 2021