ResearchTrend.AI
Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations
Tessa Han, Suraj Srinivas, Himabindu Lakkaraju
arXiv:2206.01254 · 2 June 2022 · FAtt
Papers citing "Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post Hoc Explanations"

21 of 21 papers shown.
• Axiomatic Explainer Globalness via Optimal Transport (13 Mar 2025)
  Davin Hill, Josh Bone, A. Masoomi, Max Torop, Jennifer Dy
• Building Bridges, Not Walls -- Advancing Interpretability by Unifying Feature, Data, and Model Component Attribution (17 Feb 2025) [FAtt]
  Shichang Zhang, Tessa Han, Usha Bhalla, Hima Lakkaraju
• Feature Importance Depends on Properties of the Data: Towards Choosing the Correct Explanations for Your Data and Decision Trees based Models (11 Feb 2025) [FAtt, XAI]
  Célia Wafa Ayad, Thomas Bonnier, Benjamin Bosch, Sonali Parbhoo, Jesse Read
• Attention Mechanisms Don't Learn Additive Models: Rethinking Feature Importance for Transformers (10 Jan 2025)
  Tobias Leemann, Alina Fastowski, Felix Pfeiffer, Gjergji Kasneci
• A Tale of Two Imperatives: Privacy and Explainability (30 Dec 2024)
  Supriya Manna, Niladri Sett
• Unifying Feature-Based Explanations with Functional ANOVA and Cooperative Game Theory (22 Dec 2024) [FAtt]
  Fabian Fumagalli, Maximilian Muschalik, Eyke Hüllermeier, Barbara Hammer, J. Herbinger
• Need of AI in Modern Education: in the Eyes of Explainable AI (xAI) (31 Jul 2024)
  Supriya Manna, Dionis Barcari
• Amazing Things Come From Having Many Good Models (05 Jul 2024)
  Cynthia Rudin, Chudi Zhong, Lesia Semenova, Margo Seltzer, Ronald E. Parr, Jiachang Liu, Srikar Katta, Jon Donnelly, Harry Chen, Zachery Boner
• MOUNTAINEER: Topology-Driven Visual Analytics for Comparing Local Explanations (21 Jun 2024)
  Parikshit Solunke, Vitória Guardieiro, Joao Rulff, Peter Xenopoulos, G. Chan, Brian Barr, L. G. Nonato, Claudio Silva
• A Fresh Look at Sanity Checks for Saliency Maps (03 May 2024) [FAtt, LRM]
  Anna Hedström, Leander Weber, Sebastian Lapuschkin, Marina M.-C. Höhne
• The Duet of Representations and How Explanations Exacerbate It (13 Feb 2024) [CML, FAtt]
  Charles Wan, Rodrigo Belo, Leid Zejnilovic, Susana Lavado
• Is Ignorance Bliss? The Role of Post Hoc Explanation Faithfulness and Alignment in Model Trust in Laypeople and Domain Experts (09 Dec 2023) [FAtt]
  Tessa Han, Yasha Ektefaie, Maha Farhat, Marinka Zitnik, Himabindu Lakkaraju
• A novel post-hoc explanation comparison metric and applications (17 Nov 2023) [FAtt]
  Shreyan Mitra, Leilani H. Gilpin
• Situated Natural Language Explanations (27 Aug 2023) [LRM]
  Zining Zhu, Hao Jiang, Jingfeng Yang, Sreyashi Nag, Chao Zhang, Jie Huang, Yifan Gao, Frank Rudzicz, Bing Yin
• Discriminative Feature Attributions: Bridging Post Hoc Explainability and Inherent Interpretability (27 Jul 2023) [FAtt, CML]
  Usha Bhalla, Suraj Srinivas, Himabindu Lakkaraju
• Which Models have Perceptually-Aligned Gradients? An Explanation via Off-Manifold Robustness (30 May 2023) [AAML]
  Suraj Srinivas, Sebastian Bordt, Hima Lakkaraju
• UFO: A unified method for controlling Understandability and Faithfulness Objectives in concept-based explanations for CNNs (27 Mar 2023)
  V. V. Ramaswamy, Sunnie S. Y. Kim, Ruth C. Fong, Olga Russakovsky
• Impossibility Theorems for Feature Attribution (22 Dec 2022) [FAtt]
  Blair Bilodeau, Natasha Jaques, Pang Wei Koh, Been Kim
• Tensions Between the Proxies of Human Values in AI (14 Dec 2022)
  Teresa Datta, D. Nissani, Max Cembalest, Akash Khanna, Haley Massa, John P. Dickerson
• Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations (15 May 2022)
  Jessica Dai, Sohini Upadhyay, Ulrich Aïvodji, Stephen H. Bach, Himabindu Lakkaraju
• The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective (03 Feb 2022)
  Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju