ResearchTrend.AI

L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data
arXiv:1808.02610 · 8 August 2018
Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan
FAtt, TDI

Papers citing "L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data"

50 / 115 papers shown
Prediction via Shapley Value Regression
Amr Alkhatib, Roman Bresson, Henrik Bostrom, Michalis Vazirgiannis
TDI, FAtt · 69 · 0 · 0 · 07 May 2025

A Meaningful Perturbation Metric for Evaluating Explainability Methods
Danielle Cohen, Hila Chefer, Lior Wolf
AAML · 30 · 0 · 0 · 09 Apr 2025

Explainable post-training bias mitigation with distribution-based fairness metrics
Ryan Franks, A. Miroshnikov
37 · 0 · 0 · 01 Apr 2025

FW-Shapley: Real-time Estimation of Weighted Shapley Values
Pranoy Panda, Siddharth Tandon, V. Balasubramanian
TDI · 70 · 0 · 0 · 09 Mar 2025

Suboptimal Shapley Value Explanations
Xiaolei Lu
FAtt · 72 · 0 · 0 · 17 Feb 2025

Building Bridges, Not Walls -- Advancing Interpretability by Unifying Feature, Data, and Model Component Attribution
Shichang Zhang, Tessa Han, Usha Bhalla, Hima Lakkaraju
FAtt · 160 · 0 · 0 · 17 Feb 2025

MBExplainer: Multilevel bandit-based explanations for downstream models with augmented graph embeddings
Ashkan Golgoon, Ryan Franks, Khashayar Filom, Arjun Ravi Kannan
43 · 0 · 0 · 01 Nov 2024
Improving the Weighting Strategy in KernelSHAP
Lars Henry Berge Olsen, Martin Jullum
TDI, FAtt · 77 · 2 · 0 · 07 Oct 2024

Provably Accurate Shapley Value Estimation via Leverage Score Sampling
Christopher Musco, R. Teal Witter
FAtt, FedML, TDI · 57 · 2 · 0 · 02 Oct 2024

Sufficient and Necessary Explanations (and What Lies in Between)
Beepul Bharti, Paul H. Yi, Jeremias Sulam
XAI, FAtt · 40 · 2 · 0 · 30 Sep 2024

Local Explanations and Self-Explanations for Assessing Faithfulness in black-box LLMs
Christos Fragkathoulas, Odysseas S. Chlapanis
LRM · 25 · 0 · 0 · 18 Sep 2024

Feature Inference Attack on Shapley Values
Xinjian Luo, Yangfan Jiang, X. Xiao
AAML, FAtt · 48 · 19 · 0 · 16 Jul 2024

TokenSHAP: Interpreting Large Language Models with Monte Carlo Shapley Value Estimation
Roni Goldshmidt, Miriam Horovicz
LLMAG · 29 · 7 · 0 · 14 Jul 2024

IG2: Integrated Gradient on Iterative Gradient Path for Feature Attribution
Yue Zhuo, Zhiqiang Ge
31 · 7 · 0 · 16 Jun 2024

WeShap: Weak Supervision Source Evaluation with Shapley Values
Naiqing Guan, Nick Koudas
65 · 0 · 0 · 16 Jun 2024

Attri-Net: A Globally and Locally Inherently Interpretable Model for Multi-Label Classification Using Class-Specific Counterfactuals
Susu Sun, S. Woerner, Andreas Maier, Lisa M. Koch, Christian F. Baumgartner
FAtt · 29 · 1 · 0 · 08 Jun 2024
I Bet You Did Not Mean That: Testing Semantic Importance via Betting
Jacopo Teneggi, Jeremias Sulam
FAtt · 43 · 1 · 0 · 29 May 2024

Multi-Level Explanations for Generative Language Models
Lucas Monteiro Paes, Dennis L. Wei, Hyo Jin Do, Hendrik Strobelt, Ronny Luss, ..., Manish Nagireddy, Karthikeyan N. Ramamurthy, P. Sattigeri, Werner Geyer, Soumya Ghosh
FAtt · 62 · 8 · 0 · 21 Mar 2024

Explaining Probabilistic Models with Distributional Values
Luca Franceschi, Michele Donini, Cédric Archambeau, Matthias Seeger
FAtt · 42 · 2 · 0 · 15 Feb 2024

EcoVal: An Efficient Data Valuation Framework for Machine Learning
Ayush K Tarun, Vikram S Chundawat, Murari Mandal, Hong Ming Tan, Bowei Chen, Mohan Kankanhalli
TDI · 38 · 1 · 0 · 14 Feb 2024

SyntaxShap: Syntax-aware Explainability Method for Text Generation
Kenza Amara, Rita Sevastjanova, Mennatallah El-Assady
46 · 2 · 0 · 14 Feb 2024

Shapley Values-enabled Progressive Pseudo Bag Augmentation for Whole Slide Image Classification
Renao Yan, Qiehe Sun, Cheng Jin, Yiqing Liu, Yonghong He, Tian Guan, Hao Chen
56 · 9 · 0 · 09 Dec 2023

TextGenSHAP: Scalable Post-hoc Explanations in Text Generation with Long Documents
James Enouen, Hootan Nakhost, Sayna Ebrahimi, Sercan O. Arik, Yan Liu, Tomas Pfister
43 · 5 · 0 · 03 Dec 2023

Improving Interpretation Faithfulness for Vision Transformers
Lijie Hu, Yixin Liu, Ninghao Liu, Mengdi Huai, Lichao Sun, Di Wang
46 · 5 · 0 · 29 Nov 2023
Greedy PIG: Adaptive Integrated Gradients
Kyriakos Axiotis, Sami Abu-El-Haija, Lin Chen, Matthew Fahrbach, Gang Fu
FAtt · 28 · 0 · 0 · 10 Nov 2023

Fast Shapley Value Estimation: A Unified Approach
Borui Zhang, Baotong Tian, Wenzhao Zheng, Jie Zhou, Jiwen Lu
TDI, FAtt · 32 · 0 · 0 · 02 Nov 2023

The Thousand Faces of Explainable AI Along the Machine Learning Life Cycle: Industrial Reality and Current State of Research
Thomas Decker, Ralf Gross, Alexander Koebler, Michael Lebacher, Ronald Schnitzer, Stefan H. Weber
36 · 2 · 0 · 11 Oct 2023

Stabilizing Estimates of Shapley Values with Control Variates
Jeremy Goldwasser, Giles Hooker
FAtt · 35 · 5 · 0 · 11 Oct 2023

Fair Feature Importance Scores for Interpreting Tree-Based Methods and Surrogates
Camille Olivia Little, Debolina Halder Lina, Genevera I. Allen
31 · 1 · 0 · 06 Oct 2023

Refutation of Shapley Values for XAI -- Additional Evidence
Xuanxiang Huang, Sasha Rubin
AAML · 37 · 4 · 0 · 30 Sep 2023

A Refutation of Shapley Values for Explainability
Xuanxiang Huang, Sasha Rubin
FAtt · 29 · 3 · 0 · 06 Sep 2023

Explainability is NOT a Game
Sasha Rubin, Xuanxiang Huang
31 · 17 · 0 · 27 Jun 2023

PWSHAP: A Path-Wise Explanation Model for Targeted Variables
Lucile Ter-Minassian, Oscar Clivio, Karla Diaz-Ordaz, R. Evans, Chris Holmes
31 · 1 · 0 · 26 Jun 2023
Explaining Predictive Uncertainty with Information Theoretic Shapley Values
David S. Watson, Joshua O'Hara, Niek Tax, Richard Mudd, Ido Guy
TDI, FAtt · 39 · 22 · 0 · 09 Jun 2023

DU-Shapley: A Shapley Value Proxy for Efficient Dataset Valuation
Felipe Garrido-Lucero, Benjamin Heymann, Maxime Vono, P. Loiseau, Vianney Perchet
FedML, TDI · 50 · 3 · 0 · 03 Jun 2023

Can We Trust Explainable AI Methods on ASR? An Evaluation on Phoneme Recognition
Xiao-lan Wu, P. Bell, A. Rajan
29 · 5 · 0 · 29 May 2023

Unsupervised Selective Rationalization with Noise Injection
Adam Storek, Melanie Subbiah, Kathleen McKeown
31 · 2 · 0 · 27 May 2023

Disproving XAI Myths with Formal Methods -- Initial Results
Sasha Rubin
47 · 8 · 0 · 13 May 2023

Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK
L. Nannini, Agathe Balayn, A. Smith
26 · 37 · 0 · 20 Apr 2023

ViT-Calibrator: Decision Stream Calibration for Vision Transformer
Lin Chen, Zhijie Jia, Tian Qiu, Lechao Cheng, Jie Lei, Zunlei Feng, Min-Gyoo Song
34 · 1 · 0 · 10 Apr 2023

HarsanyiNet: Computing Accurate Shapley Values in a Single Forward Propagation
Lu Chen, Siyu Lou, Keyan Zhang, Jin Huang, Quanshi Zhang
TDI, FAtt · 34 · 9 · 0 · 04 Apr 2023
Beyond Demographic Parity: Redefining Equal Treatment
Carlos Mougan, Laura State, Antonio Ferrara, Salvatore Ruggieri, Steffen Staab
FaML · 38 · 1 · 0 · 14 Mar 2023

The Inadequacy of Shapley Values for Explainability
Xuanxiang Huang, Sasha Rubin
FAtt · 39 · 41 · 0 · 16 Feb 2023

Efficient XAI Techniques: A Taxonomic Survey
Yu-Neng Chuang, Guanchu Wang, Fan Yang, Zirui Liu, Xuanting Cai, Mengnan Du, Xia Hu
26 · 33 · 0 · 07 Feb 2023

Provable Robust Saliency-based Explanations
Chao Chen, Chenghua Guo, Guixiang Ma, Ming Zeng, Xi Zhang, Sihong Xie
AAML, FAtt · 41 · 0 · 0 · 28 Dec 2022

Identifying the Source of Vulnerability in Explanation Discrepancy: A Case Study in Neural Text Classification
Ruixuan Tang, Hanjie Chen, Yangfeng Ji
AAML, FAtt · 34 · 2 · 0 · 10 Dec 2022

A Rigorous Study Of The Deep Taylor Decomposition
Leon Sixt, Tim Landgraf
FAtt, AAML · 27 · 4 · 0 · 14 Nov 2022

Trade-off Between Efficiency and Consistency for Removal-based Explanations
Yifan Zhang, Haowei He, Zhiyuan Tan, Yang Yuan
FAtt · 46 · 3 · 0 · 31 Oct 2022

Interpretable Geometric Deep Learning via Learnable Randomness Injection
Siqi Miao, Yunan Luo, Miaoyuan Liu, Pan Li
31 · 25 · 0 · 30 Oct 2022

Generating Hierarchical Explanations on Text Classification Without Connecting Rules
Yiming Ju, Yuanzhe Zhang, Kang Liu, Jun Zhao
FAtt · 26 · 3 · 0 · 24 Oct 2022