ResearchTrend.AI

Explaining by Removing: A Unified Framework for Model Explanation
arXiv:2011.14878 · 21 November 2020
Ian Covert
Scott M. Lundberg
Su-In Lee
FAtt

Papers citing "Explaining by Removing: A Unified Framework for Model Explanation"

Showing 50 of 150 citing papers.
EMaP: Explainable AI with Manifold-based Perturbations
Minh Nhat Vu
Huy Mai
My T. Thai
AAML
35
2
0
18 Sep 2022
From Shapley Values to Generalized Additive Models and back
Sebastian Bordt
U. V. Luxburg
FAtt
TDI
69
34
0
08 Sep 2022
Incremental Permutation Feature Importance (iPFI): Towards Online Explanations on Data Streams
Fabian Fumagalli
Maximilian Muschalik
Eyke Hüllermeier
Barbara Hammer
24
20
0
05 Sep 2022
Diffusion-based Time Series Imputation and Forecasting with Structured State Space Models
Juan Miguel Lopez Alcaraz
Nils Strodthoff
DiffM
28
167
0
19 Aug 2022
Algorithms to estimate Shapley value feature attributions
Hugh Chen
Ian Covert
Scott M. Lundberg
Su-In Lee
TDI
FAtt
23
212
0
15 Jul 2022
SHAP-XRT: The Shapley Value Meets Conditional Independence Testing
Jacopo Teneggi
Beepul Bharti
Yaniv Romano
Jeremias Sulam
FAtt
20
3
0
14 Jul 2022
BASED-XAI: Breaking Ablation Studies Down for Explainable Artificial Intelligence
Isha Hameed
Samuel Sharpe
Daniel Barcklow
Justin Au-yeung
Sahil Verma
Jocelyn Huang
Brian Barr
C. B. Bruss
35
14
0
12 Jul 2022
An Additive Instance-Wise Approach to Multi-class Model Interpretation
Vy Vo
Van Nguyen
Trung Le
Quan Hung Tran
Gholamreza Haffari
S. Çamtepe
Dinh Q. Phung
FAtt
40
5
0
07 Jul 2022
Causality for Inherently Explainable Transformers: CAT-XPLAIN
Subash Khanal
Benjamin Brodie
Xin Xing
Ai-Ling Lin
Nathan Jacobs
17
4
0
29 Jun 2022
Analyzing Explainer Robustness via Probabilistic Lipschitzness of Prediction Functions
Zulqarnain Khan
Davin Hill
A. Masoomi
Joshua Bone
Jennifer Dy
AAML
36
3
0
24 Jun 2022
OpenXAI: Towards a Transparent Evaluation of Model Explanations
Chirag Agarwal
Dan Ley
Satyapriya Krishna
Eshika Saxena
Martin Pawelczyk
Nari Johnson
Isha Puri
Marinka Zitnik
Himabindu Lakkaraju
XAI
26
140
0
22 Jun 2022
The Manifold Hypothesis for Gradient-Based Explanations
Sebastian Bordt
Uddeshya Upadhyay
Zeynep Akata
U. V. Luxburg
FAtt
AAML
18
12
0
15 Jun 2022
Learning to Estimate Shapley Values with Vision Transformers
Ian Covert
Chanwoo Kim
Su-In Lee
FAtt
25
34
0
10 Jun 2022
Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations
Tessa Han
Suraj Srinivas
Himabindu Lakkaraju
FAtt
30
87
0
02 Jun 2022
Order-sensitive Shapley Values for Evaluating Conceptual Soundness of NLP Models
Kaiji Lu
Anupam Datta
13
0
0
01 Jun 2022
A Sea of Words: An In-Depth Analysis of Anchors for Text Data
Gianluigi Lopardo
F. Precioso
Damien Garreau
19
6
0
27 May 2022
Explaining Preferences with Shapley Values
Robert Hu
Siu Lun Chau
Jaime Ferrando Huertas
Dino Sejdinovic
TDI
FAtt
11
6
0
26 May 2022
Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations
Jessica Dai
Sohini Upadhyay
Ulrich Aivodji
Stephen H. Bach
Himabindu Lakkaraju
37
56
0
15 May 2022
Data Debugging with Shapley Importance over End-to-End Machine Learning Pipelines
Bojan Karlaš
David Dao
Matteo Interlandi
Bo-wen Li
Sebastian Schelter
Wentao Wu
Ce Zhang
TDI
11
26
0
23 Apr 2022
Ultra-marginal Feature Importance: Learning from Data with Causal Guarantees
Joseph Janssen
Vincent Guan
Elina Robeva
24
7
0
21 Apr 2022
Missingness Bias in Model Debugging
Saachi Jain
Hadi Salman
E. Wong
Pengchuan Zhang
Vibhav Vineet
Sai H. Vemprala
A. Madry
22
37
0
19 Apr 2022
From Modern CNNs to Vision Transformers: Assessing the Performance, Robustness, and Classification Strategies of Deep Learning Models in Histopathology
Maximilian Springenberg
A. Frommholz
M. Wenzel
Eva Weicken
Jackie Ma
Nils Strodthoff
MedIm
25
42
0
11 Apr 2022
Contrastive language and vision learning of general fashion concepts
P. Chia
Giuseppe Attanasio
Federico Bianchi
Silvia Terragni
A. Magalhães
Diogo Gonçalves
C. Greco
Jacopo Tagliabue
CLIP
15
42
0
08 Apr 2022
Interactive Evolutionary Multi-Objective Optimization via Learning-to-Rank
Ke Li
Guiyu Lai
Xinghu Yao
16
11
0
06 Apr 2022
Don't Get Me Wrong: How to Apply Deep Visual Interpretations to Time Series
Christoffer Loeffler
Wei-Cheng Lai
Bjoern M. Eskofier
Dario Zanca
Lukas M. Schmidt
Christopher Mutschler
FAtt
AI4TS
33
5
0
14 Mar 2022
Sparse Subspace Clustering for Concept Discovery (SSCCD)
Johanna Vielhaben
Stefan Blücher
Nils Strodthoff
23
6
0
11 Mar 2022
A Consistent and Efficient Evaluation Strategy for Attribution Methods
Yao Rong
Tobias Leemann
V. Borisov
Gjergji Kasneci
Enkelejda Kasneci
FAtt
23
92
0
01 Feb 2022
Post-Hoc Explanations Fail to Achieve their Purpose in Adversarial Contexts
Sebastian Bordt
Michèle Finck
Eric Raidl
U. V. Luxburg
AILaw
29
77
0
25 Jan 2022
Toward Explainable AI for Regression Models
S. Letzgus
Patrick Wagner
Jonas Lederer
Wojciech Samek
Klaus-Robert Müller
G. Montavon
XAI
28
63
0
21 Dec 2021
Using Shapley Values and Variational Autoencoders to Explain Predictive Models with Dependent Mixed Features
Lars Henry Berge Olsen
I. Glad
Martin Jullum
K. Aas
TDI
FAtt
21
17
0
26 Nov 2021
Using Color To Identify Insider Threats
Sameer Tajdin Khanna
AAML
20
1
0
25 Nov 2021
Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning
Yongchan Kwon
James Y. Zou
TDI
30
122
0
26 Oct 2021
Inferring feature importance with uncertainties in high-dimensional data
P. V. Johnsen
Inga Strümke
S. Riemer-Sørensen
A. Dewan
M. Langaas
TDI
FAtt
6
2
0
02 Sep 2021
Temporal Dependencies in Feature Importance for Time Series Predictions
Kin Kwan Leung
Clayton Rooke
Jonathan Smith
S. Zuberi
M. Volkovs
OOD
AI4TS
23
24
0
29 Jul 2021
FastSHAP: Real-Time Shapley Value Estimation
N. Jethani
Mukund Sudarshan
Ian Covert
Su-In Lee
Rajesh Ranganath
TDI
FAtt
67
122
0
15 Jul 2021
An Imprecise SHAP as a Tool for Explaining the Class Probability Distributions under Limited Training Data
Lev V. Utkin
A. Konstantinov
Kirill Vishniakov
FAtt
21
5
0
16 Jun 2021
Accurate Shapley Values for explaining tree-based models
Salim I. Amoukou
Nicolas Brunel
Tangi Salaun
TDI
FAtt
14
13
0
07 Jun 2021
Energy-Based Learning for Cooperative Games, with Applications to Valuation Problems in Machine Learning
Yatao Bian
Yu Rong
Tingyang Xu
Jiaxiang Wu
Andreas Krause
Junzhou Huang
32
16
0
05 Jun 2021
Do not explain without context: addressing the blind spot of model explanations
Katarzyna Woźnica
Katarzyna Pękala
Hubert Baniecki
Wojciech Kretowicz
Elżbieta Sienkiewicz
P. Biecek
23
1
0
28 May 2021
Explaining a Series of Models by Propagating Shapley Values
Hugh Chen
Scott M. Lundberg
Su-In Lee
TDI
FAtt
22
122
0
30 Apr 2021
Sampling Permutations for Shapley Value Estimation
Rory Mitchell
Joshua N. Cooper
E. Frank
G. Holmes
14
113
0
25 Apr 2021
Fast Hierarchical Games for Image Explanations
Jacopo Teneggi
Alexandre Luster
Jeremias Sulam
FAtt
31
17
0
13 Apr 2021
Ensembles of Random SHAPs
Lev V. Utkin
A. Konstantinov
FAtt
16
20
0
04 Mar 2021
Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations
N. Jethani
Mukund Sudarshan
Yindalon Aphinyanagphongs
Rajesh Ranganath
FAtt
82
70
0
02 Mar 2021
PredDiff: Explanations and Interactions from Conditional Expectations
Stefan Blücher
Johanna Vielhaben
Nils Strodthoff
FAtt
22
19
0
26 Feb 2021
Feature Importance Explanations for Temporal Black-Box Models
Akshay Sood
M. Craven
FAtt
OOD
17
15
0
23 Feb 2021
Shapley values for feature selection: The good, the bad, and the axioms
D. Fryer
Inga Strümke
Hien Nguyen
FAtt
TDI
6
190
0
22 Feb 2021
Improving KernelSHAP: Practical Shapley Value Estimation via Linear Regression
Ian Covert
Su-In Lee
FAtt
10
161
0
02 Dec 2020
Feature Removal Is a Unifying Principle for Model Explanation Methods
Ian Covert
Scott M. Lundberg
Su-In Lee
FAtt
31
33
0
06 Nov 2020
From Clustering to Cluster Explanations via Neural Networks
Jacob R. Kauffmann
Malte Esders
Lukas Ruff
G. Montavon
Wojciech Samek
K. Müller
16
68
0
18 Jun 2019