ResearchTrend.AI

Towards learning to explain with concept bottleneck models: mitigating information leakage
arXiv:2211.03656 · 7 November 2022
J. Lockhart
Nicolas Marchesotti
Daniele Magazzeni
Manuela Veloso

Papers citing "Towards learning to explain with concept bottleneck models: mitigating information leakage"

5 / 5 papers shown
If Concept Bottlenecks are the Question, are Foundation Models the Answer? (28 Apr 2025)
Nicola Debole
Pietro Barbiero
Francesco Giannini
Andrea Passerini
Stefano Teso
Emanuele Marconato
Leakage and Interpretability in Concept-Based Models (18 Apr 2025)
Enrico Parisini
Tapabrata Chakraborti
Chris Harbron
Ben D. MacArthur
Christopher R. S. Banerji
Learn to explain yourself, when you can: Equipping Concept Bottleneck Models with the ability to abstain on their concept predictions (21 Nov 2022)
J. Lockhart
Daniele Magazzeni
Manuela Veloso
On Completeness-aware Concept-Based Explanations in Deep Neural Networks (17 Oct 2019)
Chih-Kuan Yeh
Been Kim
Sercan Ö. Arik
Chun-Liang Li
Tomas Pfister
Pradeep Ravikumar
FAtt
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning (06 Jun 2015)
Y. Gal
Zoubin Ghahramani
UQCV
BDL