
arXiv: 2009.11023
The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets

23 September 2020
Oana-Maria Camburu
Eleonora Giunchiglia
Jakob N. Foerster
Thomas Lukasiewicz
Phil Blunsom
    FAtt

Papers citing "The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets"

11 / 11 papers shown
Interpretability in Symbolic Regression: a benchmark of Explanatory Methods using the Feynman data set
Guilherme Seidyo Imai Aldeia
Fabrício Olivetti de França
94
10
0
08 Apr 2024
Faithfulness Tests for Natural Language Explanations
Pepa Atanasova
Oana-Maria Camburu
Christina Lioma
Thomas Lukasiewicz
J. Simonsen
Isabelle Augenstein
FAtt
120
67
0
29 May 2023
Leveraging Explanations in Interactive Machine Learning: An Overview
Stefano Teso
Öznur Alkan
Wolfgang Stammer
Elizabeth M. Daly
XAI, FAtt, LRM
164
63
0
29 Jul 2022
Explainability's Gain is Optimality's Loss? -- How Explanations Bias Decision-making
Charley L. Wan
Rodrigo Belo
Leid Zejnilovic
FAtt, FaML
37
5
0
17 Jun 2022
"Will You Find These Shortcuts?" A Protocol for Evaluating the Faithfulness of Input Salience Methods for Text Classification
Jasmijn Bastings
Sebastian Ebert
Polina Zablotskaia
Anders Sandholm
Katja Filippova
152
78
0
14 Nov 2021
Toward Learning Human-aligned Cross-domain Robust Models by Countering Misaligned Features
Haohan Wang
Zeyi Huang
Hanlin Zhang
Yong Jae Lee
Eric P. Xing
OOD
200
16
0
05 Nov 2021
Knowledge-Grounded Self-Rationalization via Extractive and Natural Language Explanations
Bodhisattwa Prasad Majumder
Oana-Maria Camburu
Thomas Lukasiewicz
Julian McAuley
100
36
0
25 Jun 2021
Probabilistic Sufficient Explanations
Eric Wang
Pasha Khosravi
Guy Van den Broeck
XAI, FAtt, TPM
174
25
0
21 May 2021
Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks
Thorben Funke
Megha Khosla
Mandeep Rathee
Avishek Anand
FAtt
105
41
0
18 May 2021
e-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks
Maxime Kayser
Oana-Maria Camburu
Leonard Salewski
Cornelius Emde
Virginie Do
Zeynep Akata
Thomas Lukasiewicz
VLM
114
101
0
08 May 2021
To what extent do human explanations of model behavior align with actual model behavior?
Grusha Prasad
Yixin Nie
Joey Tianyi Zhou
Robin Jia
Douwe Kiela
Adina Williams
73
28
0
24 Dec 2020