ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Feature relevance quantification in explainable AI: A causal problem (v2, latest)
29 October 2019
Dominik Janzing, Lenon Minorics, Patrick Blöbaum
Topics: FAtt, CML

Papers citing "Feature relevance quantification in explainable AI: A causal problem"

21 / 21 papers shown
  • Computing Exact Shapley Values in Polynomial Time for Product-Kernel Methods (22 May 2025). Majid Mohammadi, Siu Lun Chau, Krikamol Muandet. Topics: FAtt. Citations: 0.
  • Explaining the Behavior of Black-Box Prediction Algorithms with Causal Learning (10 Jan 2025). Numair Sani, Daniel Malinsky, I. Shpitser. Topics: CML. Citations: 16.
  • Unifying Feature-Based Explanations with Functional ANOVA and Cooperative Game Theory (22 Dec 2024). Fabian Fumagalli, Maximilian Muschalik, Eyke Hüllermeier, Barbara Hammer, J. Herbinger. Topics: FAtt. Citations: 4.
  • Unlearning-based Neural Interpretations (10 Oct 2024). Ching Lam Choi, Alexandre Duplessis, Serge Belongie. Topics: FAtt. Citations: 0.
  • Provably Accurate Shapley Value Estimation via Leverage Score Sampling (02 Oct 2024). Christopher Musco, R. Teal Witter. Topics: FAtt, FedML, TDI. Citations: 5.
  • On the Tractability of SHAP Explanations (18 Sep 2020). Guy Van den Broeck, A. Lykov, Maximilian Schleich, Dan Suciu. Topics: FAtt, TDI. Citations: 276.
  • Explainable AI for a No-Teardown Vehicle Component Cost Estimation: A Top-Down Approach (15 Jun 2020). A. Moawad, E. Islam, Namdoo Kim, R. Vijayagopal, A. Rousseau, Wei Biao Wu. Citations: 5.
  • The many Shapley values for model explanation (22 Aug 2019). Mukund Sundararajan, A. Najmi. Topics: TDI, FAtt. Citations: 635.
  • Explaining individual predictions when features are dependent: More accurate approximations to Shapley values (25 Mar 2019). K. Aas, Martin Jullum, Anders Løland. Topics: FAtt, TDI. Citations: 624.
  • Neural Network Attributions: A Causal Perspective (06 Feb 2019). Aditya Chattopadhyay, Piyushi Manupriya, Anirban Sarkar, V. Balasubramanian. Topics: CML. Citations: 146.
  • Consistent Individualized Feature Attribution for Tree Ensembles (12 Feb 2018). Scott M. Lundberg, G. Erion, Su-In Lee. Topics: FAtt, TDI. Citations: 1,405.
  • Adversarial Patch (27 Dec 2017). Tom B. Brown, Dandelion Mané, Aurko Roy, Martín Abadi, Justin Gilmer. Topics: AAML. Citations: 1,097.
  • Avoiding Discrimination through Causal Reasoning (08 Jun 2017). Niki Kilbertus, Mateo Rojas-Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, Bernhard Schölkopf. Topics: FaML, CML. Citations: 584.
  • A Unified Approach to Interpreting Model Predictions (22 May 2017). Scott M. Lundberg, Su-In Lee. Topics: FAtt. Citations: 22,002.
  • Learning Important Features Through Propagating Activation Differences (10 Apr 2017). Avanti Shrikumar, Peyton Greenside, A. Kundaje. Topics: FAtt. Citations: 3,881.
  • Axiomatic Attribution for Deep Networks (04 Mar 2017). Mukund Sundararajan, Ankur Taly, Qiqi Yan. Topics: OOD, FAtt. Citations: 6,018.
  • Adversarial examples in the physical world (08 Jul 2016). Alexey Kurakin, Ian Goodfellow, Samy Bengio. Topics: SILM, AAML. Citations: 5,909.
  • Not Just a Black Box: Learning Important Features Through Propagating Activation Differences (05 May 2016). Avanti Shrikumar, Peyton Greenside, A. Shcherbina, A. Kundaje. Topics: FAtt. Citations: 791.
  • Layer-wise Relevance Propagation for Neural Networks with Local Renormalization Layers (04 Apr 2016). Alexander Binder, G. Montavon, Sebastian Lapuschkin, K. Müller, Wojciech Samek. Topics: FAtt. Citations: 462.
  • "Why Should I Trust You?": Explaining the Predictions of Any Classifier (16 Feb 2016). Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. Topics: FAtt, FaML. Citations: 17,027.
  • Explaining and Harnessing Adversarial Examples (20 Dec 2014). Ian Goodfellow, Jonathon Shlens, Christian Szegedy. Topics: AAML, GAN. Citations: 19,121.