arXiv:2207.09615
Overlooked factors in concept-based explanations: Dataset choice, concept learnability, and human capability
V. V. Ramaswamy, Sunnie S. Y. Kim, Ruth C. Fong, Olga Russakovsky
20 July 2022 · FAtt
Papers citing "Overlooked factors in concept-based explanations: Dataset choice, concept learnability, and human capability" (7 of 7 papers shown):
| Title | Authors | Topics | Date |
|---|---|---|---|
| Explanation Bottleneck Models | Shinya Yamaguchi, Kosuke Nishida | LRM, BDL | 26 Sep 2024 |
| Embracing Diversity: Interpretable Zero-shot classification beyond one vector per class | Mazda Moayeri, Michael G. Rabbat, Mark Ibrahim, Diane Bouchacourt | VLM | 25 Apr 2024 |
| Coarse-to-Fine Concept Bottleneck Models | Konstantinos P. Panousis, Dino Ienco, Diego Marcos | | 03 Oct 2023 |
| HIVE: Evaluating the Human Interpretability of Visual Explanations | Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky | | 06 Dec 2021 |
| On Completeness-aware Concept-Based Explanations in Deep Neural Networks | Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar | FAtt | 17 Oct 2019 |
| Semantic Understanding of Scenes through the ADE20K Dataset | Bolei Zhou, Hang Zhao, Xavier Puig, Tete Xiao, Sanja Fidler, Adela Barriuso, Antonio Torralba | SSeg | 18 Aug 2016 |
| ImageNet Large Scale Visual Recognition Challenge | Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei | VLM, ObjD | 01 Sep 2014 |