Towards learning to explain with concept bottleneck models: mitigating information leakage
arXiv:2211.03656 · 7 November 2022
J. Lockhart, Nicolas Marchesotti, Daniele Magazzeni, Manuela Veloso

Papers citing "Towards learning to explain with concept bottleneck models: mitigating information leakage" (5 of 5 papers shown)

If Concept Bottlenecks are the Question, are Foundation Models the Answer?
Nicola Debole, Pietro Barbiero, Francesco Giannini, Andrea Passerini, Stefano Teso, Emanuele Marconato
28 Apr 2025

Leakage and Interpretability in Concept-Based Models
Enrico Parisini, Tapabrata Chakraborti, Chris Harbron, Ben D. MacArthur, Christopher R. S. Banerji
18 Apr 2025

Learn to explain yourself, when you can: Equipping Concept Bottleneck Models with the ability to abstain on their concept predictions
J. Lockhart, Daniele Magazzeni, Manuela Veloso
21 Nov 2022

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
Tags: FAtt
17 Oct 2019

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Y. Gal, Zoubin Ghahramani
Tags: UQCV, BDL
06 Jun 2015