Concept-based Explainable Artificial Intelligence: A Survey

20 December 2023
Eleonora Poeta, Gabriele Ciravegna, Eliana Pastor, Tania Cerquitelli, Elena Baralis
Tags: LRM, XAI

Papers citing "Concept-based Explainable Artificial Intelligence: A Survey"

17 / 17 papers shown

If Concept Bottlenecks are the Question, are Foundation Models the Answer?
Nicola Debole, Pietro Barbiero, Francesco Giannini, Andrea Passerini, Stefano Teso, Emanuele Marconato
28 Apr 2025

Addressing Concept Mislabeling in Concept Bottleneck Models Through Preference Optimization
Emiliano Penaloza, Tianyue H. Zhan, Laurent Charlin, Mateo Espinosa Zarlenga
25 Apr 2025

Representational Similarity via Interpretable Visual Concepts
Neehar Kondapaneni, Oisin Mac Aodha, Pietro Perona
Tags: DRL
19 Mar 2025

Shortcuts and Identifiability in Concept-based Models from a Neuro-Symbolic Lens
Samuele Bortolotti, Emanuele Marconato, Paolo Morettin, Andrea Passerini, Stefano Teso
16 Feb 2025

Generating Counterfactual Trajectories with Latent Diffusion Models for Concept Discovery
Payal Varshney, Adriano Lucieri, Christoph Balada, Andreas Dengel, Sheraz Ahmed
Tags: MedIm, DiffM
16 Apr 2024

Understanding Multimodal Deep Neural Networks: A Concept Selection View
Chenming Shang, Hengyuan Zhang, Hao Wen, Yujiu Yang
13 Apr 2024

Interpretable Neural-Symbolic Concept Reasoning
Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Mateo Espinosa Zarlenga, Lucie Charlotte Magister, Alberto Tonda, Pietro Liò, Frédéric Precioso, Mateja Jamnik, Giuseppe Marra
Tags: NAI, LRM
27 Apr 2023

Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces
Pattarawat Chormai, J. Herrmann, Klaus-Robert Müller, Grégoire Montavon
Tags: FAtt
30 Dec 2022

Causal Proxy Models for Concept-Based Model Explanations
Zhengxuan Wu, Karel D'Oosterlinck, Atticus Geiger, Amir Zur, Christopher Potts
Tags: MILM
28 Sep 2022

Concept Activation Regions: A Generalized Framework For Concept-Based Explanations
Jonathan Crabbé, Mihaela van der Schaar
22 Sep 2022

When are Post-hoc Conceptual Explanations Identifiable?
Tobias Leemann, Michael Kirchhof, Yao Rong, Enkelejda Kasneci, Gjergji Kasneci
28 Jun 2022

GlanceNets: Interpretabile, Leak-proof Concept-based Models
Emanuele Marconato, Andrea Passerini, Stefano Teso
31 May 2022

Post-hoc Concept Bottleneck Models
Mert Yuksekgonul, Maggie Wang, James Zou
31 May 2022

Interpretable Image Classification with Differentiable Prototypes Assignment
Dawid Rymarczyk, Łukasz Struski, Michał Górszczak, K. Lewandowska, Jacek Tabor, Bartosz Zieliński
06 Dec 2021

Making Corgis Important for Honeycomb Classification: Adversarial Attacks on Concept-based Explainability Tools
Davis Brown, Henry Kvinge
Tags: AAML
14 Oct 2021

On Interpretability of Deep Learning based Skin Lesion Classifiers using Concept Activation Vectors
Adriano Lucieri, Muhammad Naseer Bajwa, S. Braun, M. I. Malik, Andreas Dengel, Sheraz Ahmed
Tags: MedIm
05 May 2020

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
Tags: FAtt
17 Oct 2019