ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.

A Unified Approach to Interpreting Model Predictions (arXiv:1705.07874)
Scott M. Lundberg, Su-In Lee
FAtt
22 May 2017

Papers citing "A Unified Approach to Interpreting Model Predictions"

50 / 3,950 papers shown
Local Explanation of Dialogue Response Generation
Yi-Lin Tuan, Connor Pryor, Wenhu Chen, Lise Getoor, Wenjie Wang
11 Jun 2021

Verifying Quantized Neural Networks using SMT-Based Model Checking
Luiz Sena, Xidan Song, E. Alves, I. Bessa, Edoardo Manino, Lucas C. Cordeiro, Eddie Batista de Lima Filho
10 Jun 2021

On the overlooked issue of defining explanation objectives for local-surrogate explainers
Rafael Poyiadzi, X. Renard, Thibault Laugel, Raúl Santos-Rodríguez, Marcin Detyniecki
10 Jun 2021

A Deep Variational Approach to Clustering Survival Data
Laura Manduchi, Ricards Marcinkevics, M. Massi, Thomas Weikert, Alexander Sauter, ..., F. Vasella, M. Neidert, M. Pfister, Bram Stieltjes, Julia E. Vogt
10 Jun 2021

Explainable AI, but explainable to whom?
Julie Gerlings, Millie Søndergaard Jensen, Arisa Shollo
10 Jun 2021

Explaining Time Series Predictions with Dynamic Masks
Jonathan Crabbé, M. Schaar
FAtt, AI4TS
09 Jun 2021
Taxonomy of Machine Learning Safety: A Survey and Primer
Sina Mohseni, Haotao Wang, Zhiding Yu, Chaowei Xiao, Zhangyang Wang, J. Yadawa
09 Jun 2021

On the Lack of Robust Interpretability of Neural Text Classifiers
Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Ranjan Das, K. Kenthapadi
AAML
08 Jun 2021

Accurate Shapley Values for explaining tree-based models
Salim I. Amoukou, Nicolas Brunel, Tangi Salaun
TDI, FAtt
07 Jun 2021

How Did This Get Funded?! Automatically Identifying Quirky Scientific Achievements
Chen Shani, Nadav Borenstein, Dafna Shahaf
06 Jun 2021

Causal Abstractions of Neural Networks
Atticus Geiger, Hanson Lu, Thomas Icard, Christopher Potts
NAICML
06 Jun 2021
Energy-Based Learning for Cooperative Games, with Applications to Valuation Problems in Machine Learning
Yatao Bian, Yu Rong, Tingyang Xu, Jiaxiang Wu, Andreas Krause, Junzhou Huang
05 Jun 2021

Counterfactual Explanations Can Be Manipulated
Dylan Slack, Sophie Hilgard, Himabindu Lakkaraju, Sameer Singh
04 Jun 2021

Consensus Multiplicative Weights Update: Learning to Learn using Projector-based Game Signatures
N. Vadori, Rahul Savani, Thomas Spooner, Sumitra Ganesh
04 Jun 2021

Towards an Explanation Space to Align Humans and Explainable-AI Teamwork
G. Cabour, A. Morales, É. Ledoux, S. Bassetto
02 Jun 2021

On Efficiently Explaining Graph-Based Classifiers
Xuanxiang Huang, Yacine Izza, Alexey Ignatiev, Sasha Rubin
FAtt
02 Jun 2021
Towards Robust Classification Model by Counterfactual and Invariant Data Generation
C. Chang, George Adam, Anna Goldenberg
OOD, CML
02 Jun 2021

When and Why does a Model Fail? A Human-in-the-loop Error Detection Framework for Sentiment Analysis
Zhe Liu, Yufan Guo, J. Mahmud
02 Jun 2021

The Out-of-Distribution Problem in Explainability and Search Methods for Feature Importance Explanations
Peter Hase, Harry Xie, Joey Tianyi Zhou
OOD, DLRM, FAtt
01 Jun 2021

Information Theoretic Measures for Fairness-aware Feature Selection
S. Khodadadian, M. Nafea, AmirEmad Ghassami, Negar Kiyavash
01 Jun 2021

Efficient Explanations With Relevant Sets
Yacine Izza, Alexey Ignatiev, Nina Narodytska, Martin C. Cooper, Sasha Rubin
FAtt
01 Jun 2021
To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods
E. Amparore, Alan Perotti, P. Bajardi
FAtt
01 Jun 2021

Explanations for Monotonic Classifiers
Sasha Rubin, Thomas Gerspacher, M. Cooper, Alexey Ignatiev, Nina Narodytska
FAtt
01 Jun 2021

DISSECT: Disentangled Simultaneous Explanations via Concept Traversals
Asma Ghandeharioun, Been Kim, Chun-Liang Li, Brendan Jou, B. Eoff, Rosalind W. Picard
AAML
31 May 2021

SHAQ: Incorporating Shapley Value Theory into Multi-Agent Q-Learning
Jianhong Wang, Yuan Zhang, Yunjie Gu, Tae-Kyun Kim
OffRL, FAtt
31 May 2021

The effectiveness of feature attribution methods and its correlation with automatic evaluation scores
Giang Nguyen, Daeyoung Kim, Anh Totti Nguyen
FAtt
31 May 2021
Bounded logit attention: Learning to explain image classifiers
Thomas Baumhauer, D. Slijepcevic, Matthias Zeppelzauer
FAtt
31 May 2021

An exact counterfactual-example-based approach to tree-ensemble models interpretability
P. Blanchart
31 May 2021

Attention Flows are Shapley Value Explanations
Kawin Ethayarajh, Dan Jurafsky
FAtt, TDI
31 May 2021

A General Taylor Framework for Unifying and Revisiting Attribution Methods
Huiqi Deng, Na Zou, Mengnan Du, Weifu Chen, Guo-Can Feng, Helen Zhou
TDI, FAtt
28 May 2021

Do not explain without context: addressing the blind spot of model explanations
Katarzyna Woźnica, Katarzyna Pękala, Hubert Baniecki, Wojciech Kretowicz, Elżbieta Sienkiewicz, P. Biecek
28 May 2021
XOmiVAE: an interpretable deep learning model for cancer classification using high-dimensional omics data
Eloise Withnell, Xiaoyu Zhang, Kai Sun, Yike Guo
26 May 2021

Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs
Mohammad Malekzadeh, Anastasia Borovykh, Deniz Gündüz
MIACV
25 May 2021

SHAFF: Fast and consistent SHApley eFfect estimates via random Forests
Clément Bénard, Gérard Biau, Sébastien Da Veiga, Erwan Scornet
FAtt
25 May 2021

Argumentative XAI: A Survey
Kristijonas Čyras, Antonio Rago, Emanuele Albini, P. Baroni, Francesca Toni
24 May 2021

On Explaining Random Forests with SAT
Yacine Izza, Sasha Rubin
FAtt
21 May 2021
Explainable Machine Learning with Prior Knowledge: An Overview
Katharina Beckh, Sebastian Müller, Matthias Jakobs, Vanessa Toborek, Hanxiao Tan, Raphael Fischer, Pascal Welke, Sebastian Houben, Laura von Rueden
XAI
21 May 2021

Probabilistic Sufficient Explanations
Eric Wang, Pasha Khosravi, Guy Van den Broeck
XAI, FAtt, TPM
21 May 2021

Explainable Activity Recognition for Smart Home Systems
Devleena Das, Yasutaka Nishimura, R. Vivek, Naoto Takeda, Sean T. Fish, Thomas Ploetz, Sonia Chernova
20 May 2021

Evaluating the Correctness of Explainable AI Algorithms for Classification
Orcun Yalcin, Xiuyi Fan, Siyuan Liu
XAI, FAtt
20 May 2021

Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks
Thorben Funke, Megha Khosla, Mandeep Rathee, Avishek Anand
FAtt
18 May 2021
Algorithm-Agnostic Explainability for Unsupervised Clustering
Charles A. Ellis, M. Sendi, Eloy P. T. Geenjaar, Sergey Plis, Robyn L. Miller, Vince D. Calhoun
17 May 2021

Fine-grained Interpretation and Causation Analysis in Deep NLP Models
Hassan Sajjad, Narine Kokhlikyan, Fahim Dalvi, Nadir Durrani
MILM
17 May 2021

CNN-based Approaches For Cross-Subject Classification in Motor Imagery: From The State-of-The-Art to DynamicNet
Alberto Zancanaro, Giulia Cisotto, J. Paulo, G. Pires, U. J. Nunes
17 May 2021

A Review on Explainability in Multimodal Deep Neural Nets
Gargi Joshi, Rahee Walambe, K. Kotecha
17 May 2021

Abstraction, Validation, and Generalization for Explainable Artificial Intelligence
Scott Cheng-Hsin Yang, Tomas Folke, Patrick Shafto
16 May 2021
A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts
Gesina Schwalbe, Bettina Finzel
XAI
15 May 2021

Cohort Shapley value for algorithmic fairness
Masayoshi Mase, Art B. Owen, Benjamin B. Seiler
15 May 2021

Information-theoretic Evolution of Model Agnostic Global Explanations
Sukriti Verma, Nikaash Puri, Piyush B. Gupta, Balaji Krishnamurthy
FAtt
14 May 2021

Quantified Sleep: Machine learning techniques for observational n-of-1 studies
G. Truda
AI4TS
14 May 2021