Concept Activation Regions: A Generalized Framework For Concept-Based Explanations
Jonathan Crabbé, M. Schaar
arXiv:2209.11222, 22 September 2022

Papers citing "Concept Activation Regions: A Generalized Framework For Concept-Based Explanations" (11 of 11 papers shown):

Avoiding Leakage Poisoning: Concept Interventions Under Distribution Shifts [KELM] (24 Apr 2025)
M. Zarlenga, Gabriele Dominici, Pietro Barbiero, Z. Shams, M. Jamnik

Interpretable Multimodal Learning for Tumor Protein-Metal Binding: Progress, Challenges, and Perspectives (04 Apr 2025)
Xiaokun Liu, Sayedmohammadreza Rastegari, Yijun Huang, Sxe Chang Cheong, Weikang Liu, ..., Sina Tabakhi, Xianyuan Liu, Zheqing Zhu, Wei Sang, Haiping Lu

On the Value of Labeled Data and Symbolic Methods for Hidden Neuron Activation Analysis (21 Apr 2024)
Abhilekha Dalal, R. Rayan, Adrita Barua, Eugene Y. Vasserman, Md Kamruzzaman Sarker, Pascal Hitzler

Exploring the Lottery Ticket Hypothesis with Explainability Methods: Insights into Sparse Network Performance (07 Jul 2023)
Shantanu Ghosh, Kayhan Batmanghelich

When are Post-hoc Conceptual Explanations Identifiable? (28 Jun 2022)
Tobias Leemann, Michael Kirchhof, Yao Rong, Enkelejda Kasneci, Gjergji Kasneci

Navigating Neural Space: Revisiting Concept Activation Vectors to Overcome Directional Divergence (07 Feb 2022)
Frederik Pahde, Maximilian Dreyer, Leander Weber, Moritz Weckbecker, Christopher J. Anders, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin

Algorithmic Concept-based Explainable Reasoning (15 Jul 2021)
Dobrik Georgiev, Pietro Barbiero, Dmitry Kazhdan, Petar Veličković, Pietro Liò

What Do We Want From Explainable Artificial Intelligence (XAI)? -- A Stakeholder Perspective on XAI and a Conceptual Model Guiding Interdisciplinary XAI Research [XAI] (15 Feb 2021)
Markus Langer, Daniel Oster, Timo Speith, Holger Hermanns, Lena Kästner, Eva Schmidt, Andreas Sesing, Kevin Baum

On Completeness-aware Concept-Based Explanations in Deep Neural Networks [FAtt] (17 Oct 2019)
Chih-Kuan Yeh, Been Kim, Sercan Ö. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar

Towards A Rigorous Science of Interpretable Machine Learning [XAI, FaML] (28 Feb 2017)
Finale Doshi-Velez, Been Kim

SMOTE: Synthetic Minority Over-sampling Technique [AI4TS] (09 Jun 2011)
Nitesh V. Chawla, Kevin W. Bowyer, Lawrence Hall, W. Kegelmeyer