ResearchTrend.AI
Learning to Estimate Shapley Values with Vision Transformers

Ian Covert, Chanwoo Kim, Su-In Lee
arXiv 2206.05282 (v3, latest) · 10 June 2022 · FAtt
Links: ArXiv (abs) · PDF · HTML
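For context on the quantity the paper learns to estimate: the Shapley value of a feature is its weighted average marginal contribution over all subsets of the other features. The sketch below computes exact Shapley values by brute-force subset enumeration (O(2^n), which is precisely the cost that amortized estimators like this paper's avoid). The value function and per-feature contributions are invented for illustration and are not from the paper.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n):
    """Exact Shapley values via enumeration of all feature subsets (O(2^n))."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(len(others) + 1):
            # Shapley weight for coalitions of this size: |S|! (n-|S|-1)! / n!
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for subset in combinations(others, size):
                s = set(subset)
                # Marginal contribution of feature i to coalition S
                phi[i] += weight * (value_fn(s | {i}) - value_fn(s))
    return phi

# Toy additive value function: for additive games, each feature's Shapley
# value recovers exactly its own contribution (hypothetical numbers).
contrib = {0: 1.0, 1: 2.0, 2: 3.0}
v = lambda S: sum(contrib[j] for j in S)
print([round(p, 6) for p in shapley_values(v, 3)])  # → [1.0, 2.0, 3.0]
```

The brute-force loop makes the efficiency axiom easy to check: the values sum to v(N) minus v(∅). Several of the citing papers below (FW-Shapley, Stochastic Amortization, CoRTX) target exactly this exponential cost with learned real-time estimators.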

Papers citing "Learning to Estimate Shapley Values with Vision Transformers"

25 / 25 papers shown

Reasoning Like an Economist: Post-Training on Economic Problems Induces Strategic Generalization in LLMs
Yufa Zhou, S. Wang, Xingyu Dong, Xiangqi Jin, Yifang Chen, Yue Min, Kexin Yang, Xingzhang Ren, Dayiheng Liu, Linfeng Zhang
OffRL, LRM · 28 · 0 · 0 · 31 May 2025

FW-Shapley: Real-time Estimation of Weighted Shapley Values
Pranoy Panda, Siddharth Tandon, V. Balasubramanian
TDI · 153 · 1 · 0 · 09 Mar 2025

Gnothi Seauton: Empowering Faithful Self-Interpretability in Black-Box Transformers
Shaobo Wang, Hongxuan Tang, Mingyang Wang, Hao Zhang, Xuyang Liu, Weiya Li, Xuming Hu, Linfeng Zhang
46 · 0 · 0 · 29 Oct 2024

Locality Alignment Improves Vision-Language Models
Ian Covert, Tony Sun, James Zou, Tatsunori Hashimoto
VLM · 267 · 7 · 0 · 14 Oct 2024

Output Scouting: Auditing Large Language Models for Catastrophic Responses
Andrew Bell, Joao Fonseca
KELM · 145 · 2 · 0 · 04 Oct 2024

Probabilistic Conceptual Explainers: Trustworthy Conceptual Explanations for Vision Foundation Models
Hengyi Wang, Shiwei Tan, Hao Wang
BDL · 126 · 7 · 0 · 18 Jun 2024

Interpretability Needs a New Paradigm
Andreas Madsen, Himabindu Lakkaraju, Siva Reddy, Sarath Chandar
72 · 3 · 0 · 08 May 2024

Prospector Heads: Generalized Feature Attribution for Large Models & Data
Gautam Machiraju, Alexander Derry, Arjun D Desai, Neel Guha, Amir-Hossein Karimi, James Zou, Russ Altman, Christopher Ré, Parag Mallick
AI4TS, MedIm · 121 · 0 · 0 · 18 Feb 2024

Stochastic Amortization: A Unified Approach to Accelerate Feature and Data Attribution
Ian Covert, Chanwoo Kim, Su-In Lee, James Zou, Tatsunori Hashimoto
TDI · 111 · 11 · 0 · 29 Jan 2024

Identifying Important Group of Pixels using Interactions
Kosuke Sumiyasu, Kazuhiko Kawamoto, Hiroshi Kera
75 · 2 · 0 · 08 Jan 2024

TVE: Learning Meta-attribution for Transferable Vision Explainer
Guanchu Wang, Yu-Neng Chuang, Fan Yang, Mengnan Du, Chia-Yuan Chang, ..., Zirui Liu, Zhaozhuo Xu, Kaixiong Zhou, Xuanting Cai, Helen Zhou
111 · 1 · 0 · 23 Dec 2023

Explainability of Vision Transformers: A Comprehensive Review and New Perspectives
Rojina Kashefi, Leili Barekatain, Mohammad Sabokrou, Fatemeh Aghaeipoor
ViT · 105 · 11 · 0 · 12 Nov 2023

The Thousand Faces of Explainable AI Along the Machine Learning Life Cycle: Industrial Reality and Current State of Research
Thomas Decker, Ralf Gross, Alexander Koebler, Michael Lebacher, Ronald Schnitzer, Stefan H. Weber
80 · 2 · 0 · 11 Oct 2023

SHAP@k: Efficient and Probably Approximately Correct (PAC) Identification of Top-k Features
Sanjay Kariyappa, Leonidas Tsepenekas, Freddy Lecue, Daniele Magazzeni
FAtt · 55 · 6 · 0 · 10 Jul 2023

On the Robustness of Removal-Based Feature Attributions
Christy Lin, Ian Covert, Su-In Lee
122 · 13 · 0 · 12 Jun 2023

A Unified Concept-Based System for Local, Global, and Misclassification Explanations
Fatemeh Aghaeipoor, D. Asgarian, Mohammad Sabokrou
FAtt · 59 · 0 · 0 · 06 Jun 2023

CoRTX: Contrastive Framework for Real-time Explanation
Yu-Neng Chuang, Guanchu Wang, Fan Yang, Quan-Gen Zhou, Pushkar Tripathi, Xuanting Cai, Helen Zhou
91 · 20 · 0 · 05 Mar 2023

The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus
Anna Hedström, P. Bommer, Kristoffer K. Wickstrom, Wojciech Samek, Sebastian Lapuschkin, Marina M.-C. Höhne
106 · 21 · 0 · 14 Feb 2023

Efficient XAI Techniques: A Taxonomic Survey
Yu-Neng Chuang, Guanchu Wang, Fan Yang, Zirui Liu, Xuanting Cai, Mengnan Du, Helen Zhou
74 · 34 · 0 · 07 Feb 2023

Weakly Supervised Learning Significantly Reduces the Number of Labels Required for Intracranial Hemorrhage Detection on Head CT
Jacopo Teneggi, Paul H. Yi, Jeremias Sulam
79 · 4 · 0 · 29 Nov 2022

ViT-CX: Causal Explanation of Vision Transformers
Weiyan Xie, Xiao-hui Li, Caleb Chen Cao, Nevin L. Zhang
ViT · 111 · 20 · 0 · 06 Nov 2022

What does a platypus look like? Generating customized prompts for zero-shot image classification
Sarah M Pratt, Ian Covert, Rosanne Liu, Ali Farhadi
VLM · 189 · 224 · 0 · 07 Sep 2022

Algorithms to estimate Shapley value feature attributions
Hugh Chen, Ian Covert, Scott M. Lundberg, Su-In Lee
TDI, FAtt · 93 · 236 · 0 · 15 Jul 2022

SHAP-XRT: The Shapley Value Meets Conditional Independence Testing
Jacopo Teneggi, Beepul Bharti, Yaniv Romano, Jeremias Sulam
FAtt · 90 · 5 · 0 · 14 Jul 2022

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
FAtt, FaML · 1.3K · 17,211 · 0 · 16 Feb 2016