ResearchTrend.AI
Latent SHAP: Toward Practical Human-Interpretable Explanations

27 November 2022
Ron Bitton, Alon Malach, Amiel Meiseles, Satoru Momiyama, Toshinori Araki, Jun Furukawa, Yuval Elovici, A. Shabtai

Topic: FAtt
arXiv: 2211.14797

Papers citing "Latent SHAP: Toward Practical Human-Interpretable Explanations"

4 of 4 citing papers shown:

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
  Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
  Tags: FAtt | Metrics: 183 / 1,834 / 0 | 30 Nov 2017

Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR
  Sandra Wachter, Brent Mittelstadt, Chris Russell
  Tags: MLAU | Metrics: 94 / 2,346 / 0 | 01 Nov 2017

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
  Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
  Tags: FAtt, FaML | Metrics: 852 / 16,891 / 0 | 16 Feb 2016

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
  Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
  Tags: VLM | Metrics: 268 / 18,583 / 0 | 06 Feb 2015