Estimation of Concept Explanations Should be Uncertainty Aware

13 December 2023
Vihari Piratla, Juyeon Heo, Katherine M. Collins, Sukriti Singh, Adrian Weller

Papers citing "Estimation of Concept Explanations Should be Uncertainty Aware"

12 / 12 papers shown
Text-To-Concept (and Back) via Cross-Model Alignment
Mazda Moayeri, Keivan Rezaei, Maziar Sanjabi, Soheil Feizi
CLIP · 64 · 44 · 0 · 10 May 2023
Label-Free Concept Bottleneck Models
Tuomas P. Oikarinen, Subhro Das, Lam M. Nguyen, Tsui-Wei Weng
88 · 180 · 0 · 12 Apr 2023
GlanceNets: Interpretabile, Leak-proof Concept-based Models
Emanuele Marconato, Andrea Passerini, Stefano Teso
163 · 69 · 0 · 31 May 2022
Salient ImageNet: How to discover spurious features in Deep Learning?
Sahil Singla, Soheil Feizi
AAML · VLM · 86 · 120 · 0 · 08 Oct 2021
Concept Bottleneck Models
Pang Wei Koh, Thao Nguyen, Y. S. Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang
101 · 835 · 0 · 09 Jul 2020
Pyro: Deep Universal Probabilistic Programming
Eli Bingham, Jonathan P. Chen, M. Jankowiak, F. Obermeyer, Neeraj Pradhan, Theofanis Karaletsos, Rohit Singh, Paul A. Szerlip, Paul Horsfall, Noah D. Goodman
BDL · GP · 158 · 1,057 · 0 · 18 Oct 2018
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
FAtt · 242 · 1,849 · 0 · 30 Nov 2017
Network Dissection: Quantifying Interpretability of Deep Visual Representations
David Bau, Bolei Zhou, A. Khosla, A. Oliva, Antonio Torralba
MILM · FAtt · 158 · 1,526 · 1 · 19 Apr 2017
Understanding Black-box Predictions via Influence Functions
Pang Wei Koh, Percy Liang
TDI · 219 · 2,910 · 0 · 14 Mar 2017
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
FAtt · 335 · 20,110 · 0 · 07 Oct 2016
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAttFaML
1.2K
17,071
0
16 Feb 2016
Detect What You Can: Detecting and Representing Objects using Holistic Models and Body Parts
Xianjie Chen, Roozbeh Mottaghi, Xiaobai Liu, Sanja Fidler, R. Urtasun, Alan Yuille
103 · 643 · 0 · 08 Jun 2014