A Unified Approach to Interpreting Model Predictions
arXiv:1705.07874 · 22 May 2017
Scott M. Lundberg
Su-In Lee
FAtt

Papers citing "A Unified Approach to Interpreting Model Predictions"

50 / 3,941 papers shown
Local Explanations for Reinforcement Learning
Ronny Luss
Amit Dhurandhar
Miao Liu
FAtt OffRL
81
3
0
08 Feb 2022
Investigating the fidelity of explainable artificial intelligence methods for applications of convolutional neural networks in geoscience
Antonios Mamalakis
E. Barnes
I. Ebert‐Uphoff
92
77
0
07 Feb 2022
Introducing explainable supervised machine learning into interactive feedback loops for statistical production system
Carlos Mougan
G. Kanellos
Johannes Micheler
Jose Martinez
Thomas Gottron
60
1
0
07 Feb 2022
Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence
Frederik Pahde
Maximilian Dreyer
Leander Weber
Moritz Weckbecker
Christopher J. Anders
Thomas Wiegand
Wojciech Samek
Sebastian Lapuschkin
146
10
0
07 Feb 2022
Machine Learning Aided Holistic Handover Optimization for Emerging Networks
M. Farooq
Marvin Manalastas
Syed Muhammad Asad Zaidi
A. Abu-Dayya
A. Imran
8
12
0
06 Feb 2022
A Game-theoretic Understanding of Repeated Explanations in ML Models
Kavita Kumari
Murtuza Jadliwala
Sumit Kumar Jha
U. Oklahoma
FAtt
22
0
0
05 Feb 2022
The influence of labeling techniques in classifying human manipulation movement of different speed
Sadique Adnan Siddiqui
L. Gutzeit
Frank Kirchner
24
0
0
04 Feb 2022
The impact of feature importance methods on the interpretation of defect classifiers
Gopi Krishnan Rajbahadur
Shaowei Wang
Yasutaka Kamei
Ahmed E. Hassan
FAtt
57
83
0
04 Feb 2022
Towards a consistent interpretation of AIOps models
Yingzhe Lyu
Gopi Krishnan Rajbahadur
Dayi Lin
Boyuan Chen
Zhen Ming
Z. Jiang
AI4CE
79
22
0
04 Feb 2022
Rethinking Explainability as a Dialogue: A Practitioner's Perspective
Himabindu Lakkaraju
Dylan Slack
Yuxin Chen
Chenhao Tan
Sameer Singh
LRM
110
64
0
03 Feb 2022
Who will Leave a Pediatric Weight Management Program and When? -- A machine learning approach for predicting attrition patterns
Hamed Fayyaz
T. Phan
H. Bunnell
Rahmatollah Beheshti
MU
29
1
0
03 Feb 2022
The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
Satyapriya Krishna
Tessa Han
Alex Gu
Steven Wu
S. Jabbari
Himabindu Lakkaraju
284
197
0
03 Feb 2022
Fairness of Machine Learning Algorithms in Demography
I. Emmanuel
E. Mitrofanova
FaML
63
0
0
02 Feb 2022
Hierarchical Shrinkage: improving the accuracy and interpretability of tree-based methods
Abhineet Agarwal
Yan Shuo Tan
Omer Ronen
Chandan Singh
Bin Yu
95
27
0
02 Feb 2022
Datamodels: Predicting Predictions from Training Data
Andrew Ilyas
Sung Min Park
Logan Engstrom
Guillaume Leclerc
Aleksander Madry
TDI
141
143
0
01 Feb 2022
A Consistent and Efficient Evaluation Strategy for Attribution Methods
Yao Rong
Tobias Leemann
V. Borisov
Gjergji Kasneci
Enkelejda Kasneci
FAtt
103
98
0
01 Feb 2022
Exploring layerwise decision making in DNNs
Coenraad Mouton
Marelie Hattingh Davel
FAtt AI4CE
13
2
0
01 Feb 2022
A pilot study of the Earable device to measure facial muscle and eye movement tasks among healthy volunteers
M. Wipperman
Galen Pogoncheff
Katrina F. Mateo
Xuefang Wu
Yiziying Chen
...
R. Deterding
S. Hamon
Tam Vu
Rinol Alaj
Olivier Harari
53
4
0
01 Feb 2022
Deconfounded Representation Similarity for Comparison of Neural Networks
Tianyu Cui
Yogesh Kumar
Pekka Marttinen
Samuel Kaski
CML
108
17
0
31 Jan 2022
Metrics for saliency map evaluation of deep learning explanation methods
T. Gomez
Thomas Fréour
Harold Mouchère
XAI FAtt
134
45
0
31 Jan 2022
POTATO: exPlainable infOrmation exTrAcTion framewOrk
Adam Kovacs
Kinga Gémes
Eszter Iklódi
Gábor Recski
70
4
0
31 Jan 2022
Causal Explanations and XAI
Sander Beckers
CML XAI
88
36
0
31 Jan 2022
GStarX: Explaining Graph Neural Networks with Structure-Aware Cooperative Games
Shichang Zhang
Yozen Liu
Neil Shah
Yizhou Sun
FAtt
124
48
0
28 Jan 2022
Rethinking Attention-Model Explainability through Faithfulness Violation Test
Yebin Liu
Haoliang Li
Yangyang Guo
Chen Kong
Jing Li
Shiqi Wang
FAtt
183
43
0
28 Jan 2022
Vision Checklist: Towards Testable Error Analysis of Image Models to Help System Designers Interrogate Model Capabilities
Xin Du
Bénédicte Legastelois
B. Ganesh
A. Rajan
Hana Chockler
Vaishak Belle
Stuart Anderson
S. Ramamoorthy
AAML
82
6
0
27 Jan 2022
Diagnosing AI Explanation Methods with Folk Concepts of Behavior
Alon Jacovi
Jasmijn Bastings
Sebastian Gehrmann
Yoav Goldberg
Katja Filippova
141
17
0
27 Jan 2022
Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts
Sebastian Bordt
Michèle Finck
Eric Raidl
U. V. Luxburg
AILaw
108
79
0
25 Jan 2022
On the Robustness of Sparse Counterfactual Explanations to Adverse Perturbations
M. Virgolin
Saverio Fracaros
CML
92
36
0
22 Jan 2022
Causal effect of racial bias in data and machine learning algorithms on user persuasiveness & discriminatory decision making: An Empirical Study
Kinshuk Sengupta
Praveen Ranjan Srivastava
74
6
0
22 Jan 2022
RamanNet: A generalized neural network architecture for Raman Spectrum Analysis
Nabil Ibtehaz
M. Chowdhury
Amith Khandakar
S. Zughaier
S. Kiranyaz
Mohammad Sohel Rahman
39
25
0
20 Jan 2022
From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI
Meike Nauta
Jan Trienes
Shreyasi Pathak
Elisa Nguyen
Michelle Peters
Yasmin Schmitt
Jorg Schlotterer
M. V. Keulen
C. Seifert
ELM XAI
183
423
0
20 Jan 2022
Learning-From-Disagreement: A Model Comparison and Visual Analytics Framework
Junpeng Wang
Liang Wang
Yan Zheng
Chin-Chia Michael Yeh
Shubham Jain
Wei Zhang
FAtt
89
14
0
19 Jan 2022
Visual Exploration of Machine Learning Model Behavior with Hierarchical Surrogate Rule Sets
Jun Yuan
Brian Barr
Kyle Overton
E. Bertini
50
11
0
19 Jan 2022
Socioeconomic disparities and COVID-19: the causal connections
Tannista Banerjee
Ayan Paul
Vishak Srikanth
Inga Strümke
30
2
0
18 Jan 2022
Do not rug on me: Zero-dimensional Scam Detection
Bruno Mazorra
Victor Adan
Vanesa Daza
55
10
0
16 Jan 2022
Towards Zero-shot Sign Language Recognition
Yunus Can Bilge
R. G. Cinbis
Nazli Ikizler-Cinbis
SLR
58
36
0
15 Jan 2022
Fighting Money Laundering with Statistics and Machine Learning
R. Jensen
Alexandros Iosifidis
88
16
0
11 Jan 2022
Explaining Predictive Uncertainty by Looking Back at Model Explanations
Hanjie Chen
Wanyu Du
Yangfeng Ji
127
2
0
11 Jan 2022
A novel interpretable machine learning system to generate clinical risk scores: An application for predicting early mortality or unplanned readmission in a retrospective cohort study
Yilin Ning
Siqi Li
M. Ong
F. Xie
Bibhas Chakraborty
Daniel Ting
Nan Liu
FAtt
60
23
0
10 Jan 2022
Applying Machine Learning and AI Explanations to Analyze Vaccine Hesitancy
C. Lange
J. Lange
32
1
0
07 Jan 2022
Topological Representations of Local Explanations
Peter Xenopoulos
G. Chan
Harish Doraiswamy
L. G. Nonato
Brian Barr
Claudio Silva
FAtt
94
4
0
06 Jan 2022
BITES: Balanced Individual Treatment Effect for Survival data
Stefan Schrod
Andreas Schäfer
S. Solbrig
R. Lohmayer
W. Gronwald
P. Oefner
T. Beissbarth
Rainer Spang
H. Zacharias
Michael Altenbuchinger
CML
58
23
0
05 Jan 2022
Improving Deep Neural Network Classification Confidence using Heatmap-based eXplainable AI
Erico Tjoa
Hong Jing Khok
Tushar Chouhan
G. Cuntai
FAtt
76
4
0
30 Dec 2021
Shallow decision trees for explainable $k$-means clustering
E. Laber
Lucas Murtinho
F. Oliveira
62
26
0
29 Dec 2021
Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence
Kacper Sokol
Peter A. Flach
80
21
0
29 Dec 2021
Explainable Artificial Intelligence for Pharmacovigilance: What Features Are Important When Predicting Adverse Outcomes?
I. Ward
Ling Wang
Juan Lu
M. Bennamoun
Girish Dwivedi
Frank M. Sanfilippo
172
35
0
25 Dec 2021
Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review
F. Giuste
Wenqi Shi
Yuanda Zhu
Tarun Naren
Monica Isgut
Ying Sha
L. Tong
Mitali S. Gupte
May D. Wang
116
74
0
23 Dec 2021
Prolog-based agnostic explanation module for structured pattern classification
Gonzalo Nápoles
Fabian Hoitsma
A. Knoben
A. Jastrzębska
Maikel Leon Espinosa
81
13
0
23 Dec 2021
AcME -- Accelerated Model-agnostic Explanations: Fast Whitening of the Machine-Learning Black Box
David Dandolo
Chiara Masiero
Mattia Carletti
Davide Dalle Pezze
Gian Antonio Susto
FAtt LRM
62
23
0
23 Dec 2021
More Than Words: Towards Better Quality Interpretations of Text Classifiers
Muhammad Bilal Zafar
Philipp Schmidt
Michele Donini
Cédric Archambeau
F. Biessmann
Sanjiv Ranjan Das
K. Kenthapadi
FAtt
115
5
0
23 Dec 2021