ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Concept Bottleneck Models (arXiv:2007.04612)
9 July 2020
Pang Wei Koh
Thao Nguyen
Y. S. Tang
Stephen Mussmann
Emma Pierson
Been Kim
Percy Liang

Papers citing "Concept Bottleneck Models"

50 / 157 papers shown
Towards Procedural Fairness: Uncovering Biases in How a Toxic Language Classifier Uses Sentiment Information
I. Nejadgholi
Esma Balkir
Kathleen C. Fraser
S. Kiritchenko
40
3
0
19 Oct 2022
TCNL: Transparent and Controllable Network Learning Via Embedding Human-Guided Concepts
Zhihao Wang
Chuang Zhu
24
1
0
07 Oct 2022
Causal Proxy Models for Concept-Based Model Explanations
Zhengxuan Wu
Karel D'Oosterlinck
Atticus Geiger
Amir Zur
Christopher Potts
MILM
77
35
0
28 Sep 2022
Global Concept-Based Interpretability for Graph Neural Networks via Neuron Analysis
Xuanyuan Han
Pietro Barbiero
Dobrik Georgiev
Lucie Charlotte Magister
Pietro Lió
MILM
37
41
0
22 Aug 2022
Visual Interpretable and Explainable Deep Learning Models for Brain Tumor MRI and COVID-19 Chest X-ray Images
Yusuf Brima
M. Atemkeng
FAtt
MedIm
29
0
0
01 Aug 2022
Leveraging Explanations in Interactive Machine Learning: An Overview
Stefano Teso
Öznur Alkan
Wolfgang Stammer
Elizabeth M. Daly
XAI
FAtt
LRM
26
62
0
29 Jul 2022
When are Post-hoc Conceptual Explanations Identifiable?
Tobias Leemann
Michael Kirchhof
Yao Rong
Enkelejda Kasneci
Gjergji Kasneci
50
10
0
28 Jun 2022
Auditing Visualizations: Transparency Methods Struggle to Detect Anomalous Behavior
Jean-Stanislas Denain
Jacob Steinhardt
AAML
42
7
0
27 Jun 2022
C-SENN: Contrastive Self-Explaining Neural Network
Yoshihide Sawada
Keigo Nakamura
SSL
16
8
0
20 Jun 2022
GlanceNets: Interpretabile, Leak-proof Concept-based Models
Emanuele Marconato
Andrea Passerini
Stefano Teso
106
64
0
31 May 2022
Post-hoc Concept Bottleneck Models
Mert Yuksekgonul
Maggie Wang
James Zou
145
185
0
31 May 2022
Neural Basis Models for Interpretability
Filip Radenovic
Abhimanyu Dubey
D. Mahajan
FAtt
32
46
0
27 May 2022
Learnable Visual Words for Interpretable Image Recognition
Wenxi Xiao
Zhengming Ding
Hongfu Liu
VLM
25
2
0
22 May 2022
Leveraging Relational Information for Learning Weakly Disentangled Representations
Andrea Valenti
D. Bacciu
CoGe
DRL
29
5
0
20 May 2022
Clinical outcome prediction under hypothetical interventions -- a representation learning framework for counterfactual reasoning
Yikuan Li
M. Mamouei
Shishir Rao
A. Hassaine
D. Canoy
Thomas Lukasiewicz
K. Rahimi
G. Salimi-Khorshidi
OOD
CML
AI4CE
31
1
0
15 May 2022
Perspectives on Incorporating Expert Feedback into Model Updates
Valerie Chen
Umang Bhatt
Hoda Heidari
Adrian Weller
Ameet Talwalkar
30
11
0
13 May 2022
Adapting and Evaluating Influence-Estimation Methods for Gradient-Boosted Decision Trees
Jonathan Brophy
Zayd Hammoudeh
Daniel Lowd
TDI
27
22
0
30 Apr 2022
RelViT: Concept-guided Vision Transformer for Visual Relational Reasoning
Xiaojian Ma
Weili Nie
Zhiding Yu
Huaizu Jiang
Chaowei Xiao
Yuke Zhu
Song-Chun Zhu
Anima Anandkumar
ViT
LRM
30
19
0
24 Apr 2022
ConceptExplainer: Interactive Explanation for Deep Neural Networks from a Concept Perspective
Jinbin Huang
Aditi Mishra
Bum Chul Kwon
Chris Bryan
FAtt
HAI
43
31
0
04 Apr 2022
Do Vision-Language Pretrained Models Learn Composable Primitive Concepts?
Tian Yun
Usha Bhalla
Ellie Pavlick
Chen Sun
ReLM
CoGe
VLM
LRM
31
23
0
31 Mar 2022
Concept Embedding Analysis: A Review
Gesina Schwalbe
32
28
0
25 Mar 2022
SemSup: Semantic Supervision for Simple and Scalable Zero-shot Generalization
Austin W. Hanjie
A. Deshpande
Karthik Narasimhan
VLM
36
2
0
26 Feb 2022
Human-Centered Concept Explanations for Neural Networks
Chih-Kuan Yeh
Been Kim
Pradeep Ravikumar
FAtt
37
25
0
25 Feb 2022
Towards Disentangling Information Paths with Coded ResNeXt
Apostolos Avranas
Marios Kountouris
FAtt
17
1
0
10 Feb 2022
Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning
Wencan Zhang
Mariella Dimiccoli
Brian Y. Lim
FAtt
19
1
0
30 Jan 2022
Deeply Explain CNN via Hierarchical Decomposition
Ming-Ming Cheng
Peng-Tao Jiang
Linghao Han
Liang Wang
Philip H. S. Torr
FAtt
53
15
0
23 Jan 2022
HIVE: Evaluating the Human Interpretability of Visual Explanations
Sunnie S. Y. Kim
Nicole Meister
V. V. Ramaswamy
Ruth C. Fong
Olga Russakovsky
66
114
0
06 Dec 2021
Editing a classifier by rewriting its prediction rules
Shibani Santurkar
Dimitris Tsipras
Mahalaxmi Elango
David Bau
Antonio Torralba
A. Madry
KELM
180
89
0
02 Dec 2021
Image Classification with Consistent Supporting Evidence
Peiqi Wang
Ruizhi Liao
Daniel Moyer
Seth Berkowitz
Steven Horng
Polina Golland
42
2
0
13 Nov 2021
Self-Interpretable Model with Transformation Equivariant Interpretation
Yipei Wang
Xiaoqian Wang
38
23
0
09 Nov 2021
Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods
Zohaib Salahuddin
Henry C. Woodruff
A. Chatterjee
Philippe Lambin
18
302
0
01 Nov 2021
Self-explaining Neural Network with Concept-based Explanations for ICU Mortality Prediction
Sayantan Kumar
Sean C. Yu
Thomas Kannampallil
Zachary B. Abrams
Andrew Michelson
Philip R. O. Payne
FAtt
11
7
0
09 Oct 2021
Toward a Unified Framework for Debugging Concept-based Models
A. Bontempelli
Fausto Giunchiglia
Andrea Passerini
Stefano Teso
20
4
0
23 Sep 2021
A Framework for Learning Ante-hoc Explainable Models via Concepts
Anirban Sarkar
Deepak Vijaykeerthy
Anindya Sarkar
V. Balasubramanian
LRM
BDL
22
46
0
25 Aug 2021
Logic Explained Networks
Gabriele Ciravegna
Pietro Barbiero
Francesco Giannini
Marco Gori
Pietro Lió
Marco Maggini
S. Melacci
37
69
0
11 Aug 2021
GCExplainer: Human-in-the-Loop Concept-based Explanations for Graph Neural Networks
Lucie Charlotte Magister
Dmitry Kazhdan
Vikash Singh
Pietro Lió
32
48
0
25 Jul 2021
Promises and Pitfalls of Black-Box Concept Learning Models
Anita Mahinpei
Justin Clark
Isaac Lage
Finale Doshi-Velez
Weiwei Pan
31
91
0
24 Jun 2021
Entropy-based Logic Explanations of Neural Networks
Pietro Barbiero
Gabriele Ciravegna
Francesco Giannini
Pietro Lió
Marco Gori
S. Melacci
FAtt
XAI
25
78
0
12 Jun 2021
Rationalization through Concepts
Diego Antognini
Boi Faltings
FAtt
27
19
0
11 May 2021
Do Concept Bottleneck Models Learn as Intended?
Andrei Margeloiu
Matthew Ashman
Umang Bhatt
Yanzhi Chen
M. Jamnik
Adrian Weller
SLR
17
91
0
10 May 2021
Exploiting Explanations for Model Inversion Attacks
Xu Zhao
Wencan Zhang
Xiao Xiao
Brian Y. Lim
MIACV
21
82
0
26 Apr 2021
Failing Conceptually: Concept-Based Explanations of Dataset Shift
Maleakhi A. Wijaya
Dmitry Kazhdan
B. Dimanov
M. Jamnik
9
7
0
18 Apr 2021
Deep Interpretable Models of Theory of Mind
Ini Oguntola
Dana Hughes
Katia P. Sycara
HAI
33
23
0
07 Apr 2021
Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges
Cynthia Rudin
Chaofan Chen
Zhi Chen
Haiyang Huang
Lesia Semenova
Chudi Zhong
FaML
AI4CE
LRM
59
653
0
20 Mar 2021
An overview of artificial intelligence techniques for diagnosis of Schizophrenia based on magnetic resonance imaging modalities: Methods, challenges, and future works
Delaram Sadeghi
A. Shoeibi
Navid Ghassemi
Parisa Moridian
Ali Khadem
...
Juan M Gorriz
F. Khozeimeh
Yu-Dong Zhang
S. Nahavandi
U. Acharya
19
105
0
24 Feb 2021
NuCLS: A scalable crowdsourcing, deep learning approach and dataset for nucleus classification, localization and segmentation
M. Amgad
Lamees A. Atteya
Hagar Hussein
K. Mohammed
Ehab Hafiz
...
David Manthey
34
73
0
18 Feb 2021
Progressive Interpretation Synthesis: Interpreting Task Solving by Quantifying Previously Used and Unused Information
Zhengqi He
Taro Toyoizumi
19
1
0
08 Jan 2021
Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning
Wencan Zhang
Mariella Dimiccoli
Brian Y. Lim
FAtt
24
18
0
10 Dec 2020
Interpretability and Explainability: A Machine Learning Zoo Mini-tour
Ricards Marcinkevics
Julia E. Vogt
XAI
28
119
0
03 Dec 2020
Right for the Right Concept: Revising Neuro-Symbolic Concepts by Interacting with their Explanations
Wolfgang Stammer
P. Schramowski
Kristian Kersting
FAtt
14
107
0
25 Nov 2020