Interpretability is in the Mind of the Beholder: A Causal Framework for Human-interpretable Representation Learning
arXiv:2309.07742 · 14 September 2023
Emanuele Marconato, Andrea Passerini, Stefano Teso
Papers citing "Interpretability is in the Mind of the Beholder: A Causal Framework for Human-interpretable Representation Learning" (16 papers shown):
If Concept Bottlenecks are the Question, are Foundation Models the Answer?
Nicola Debole, Pietro Barbiero, Francesco Giannini, Andrea Passerini, Stefano Teso, Emanuele Marconato
28 Apr 2025

Interpretable Machine Learning in Physics: A Review
Sebastian Johann Wetzel, Seungwoong Ha, Raban Iten, Miriam Klopotek, Ziming Liu
30 Mar 2025 · AI4CE

Shortcuts and Identifiability in Concept-based Models from a Neuro-Symbolic Lens
Samuele Bortolotti, Emanuele Marconato, Paolo Morettin, Andrea Passerini, Stefano Teso
16 Feb 2025

Sample-efficient Learning of Concepts with Theoretical Guarantees: from Data to Concepts without Interventions
H. Fokkema, T. Erven, Sara Magliacane
10 Feb 2025

Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go?
Jae Hee Lee, Georgii Mikriukov, Gesina Schwalbe, Stefan Wermter, D. Wolter
20 Sep 2024

Learning Causal Abstractions of Linear Structural Causal Models
Riccardo Massidda, Sara Magliacane, Davide Bacciu
01 Jun 2024 · CML

BEARS Make Neuro-Symbolic Models Aware of their Reasoning Shortcuts
Emanuele Marconato, Samuele Bortolotti, Emile van Krieken, Antonio Vergari, Andrea Passerini, Stefano Teso
19 Feb 2024

Finding Alignments Between Interpretable Causal Variables and Distributed Neural Representations
Atticus Geiger, Zhengxuan Wu, Christopher Potts, Thomas F. Icard, Noah D. Goodman
05 Mar 2023 · CML

GlanceNets: Interpretabile, Leak-proof Concept-based Models
Emanuele Marconato, Andrea Passerini, Stefano Teso
31 May 2022

Post-hoc Concept Bottleneck Models
Mert Yuksekgonul, Maggie Wang, James Y. Zou
31 May 2022

Interactive Disentanglement: Learning Concepts by Interacting with their Prototype Representations
Wolfgang Stammer, Marius Memmel, P. Schramowski, Kristian Kersting
04 Dec 2021

Coherent Hierarchical Multi-Label Classification Networks
Eleonora Giunchiglia, Thomas Lukasiewicz
20 Oct 2020 · AILaw

Conditional Gaussian Distribution Learning for Open Set Recognition
Xin Sun, Zhen Yang, Chi Zhang, Guohao Peng, K. Ling
19 Mar 2020 · BDL, UQCV

Weakly-Supervised Disentanglement Without Compromises
Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, Michael Tschannen
07 Feb 2020 · CoGe, OOD, DRL

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
17 Oct 2019 · FAtt

Logic Tensor Networks for Semantic Image Interpretation
Ivan Donadello, Luciano Serafini, Artur Garcez
24 May 2017