Learning to Receive Help: Intervention-Aware Concept Embedding Models

29 September 2023
Mateo Espinosa Zarlenga, Katherine M. Collins, Krishnamurthy Dvijotham, Adrian Weller, Zohreh Shams, Mateja Jamnik

Papers citing "Learning to Receive Help: Intervention-Aware Concept Embedding Models"

20 / 20 papers shown

1. If Concept Bottlenecks are the Question, are Foundation Models the Answer?
   Nicola Debole, Pietro Barbiero, Francesco Giannini, Andrea Passerini, Stefano Teso, Emanuele Marconato
   28 Apr 2025

2. Leakage and Interpretability in Concept-Based Models
   Enrico Parisini, Tapabrata Chakraborti, Chris Harbron, Ben D. MacArthur, Christopher R. S. Banerji
   18 Apr 2025

3. Shortcuts and Identifiability in Concept-based Models from a Neuro-Symbolic Lens
   Samuele Bortolotti, Emanuele Marconato, Paolo Morettin, Andrea Passerini, Stefano Teso
   16 Feb 2025

4. Label-Free Concept Bottleneck Models
   Tuomas P. Oikarinen, Subhro Das, Lam M. Nguyen, Tsui-Wei Weng
   12 Apr 2023

5. Interactive Concept Bottleneck Models
   Kushal Chauhan, Rishabh Tiwari, Jan Freyberg, Pradeep Shenoy, Krishnamurthy Dvijotham
   14 Dec 2022

6. Encoding Concepts in Graph Neural Networks
   Lucie Charlotte Magister, Pietro Barbiero, Dmitry Kazhdan, F. Siciliano, Gabriele Ciravegna, Fabrizio Silvestri, M. Jamnik, Pietro Lio
   27 Jul 2022

7. GlanceNets: Interpretabile, Leak-proof Concept-based Models
   Emanuele Marconato, Andrea Passerini, Stefano Teso
   31 May 2022

8. Promises and Pitfalls of Black-Box Concept Learning Models
   Anita Mahinpei, Justin Clark, Isaac Lage, Finale Doshi-Velez, Weiwei Pan
   24 Jun 2021

9. Now You See Me (CME): Concept-based Model Extraction
   Dmitry Kazhdan, B. Dimanov, M. Jamnik, Pietro Lio, Adrian Weller
   25 Oct 2020

10. Concept Bottleneck Models
    Pang Wei Koh, Thao Nguyen, Y. S. Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang
    09 Jul 2020

11. Generative causal explanations of black-box classifiers
    Matthew R. O’Shaughnessy, Gregory H. Canal, Marissa Connor, Mark A. Davenport, Christopher Rozell
    24 Jun 2020 · CML

12. Concept Whitening for Interpretable Image Recognition
    Zhi Chen, Yijie Bei, Cynthia Rudin
    05 Feb 2020 · FAtt

13. On Completeness-aware Concept-Based Explanations in Deep Neural Networks
    Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
    17 Oct 2019 · FAtt

14. Explaining Classifiers with Causal Concept Effect (CaCE)
    Yash Goyal, Amir Feder, Uri Shalit, Been Kim
    16 Jul 2019 · CML

15. Towards Robust Interpretability with Self-Explaining Neural Networks
    David Alvarez-Melis, Tommi Jaakkola
    20 Jun 2018 · MILM, XAI

16. Net2Vec: Quantifying and Explaining how Concepts are Encoded by Filters in Deep Neural Networks
    Ruth C. Fong, Andrea Vedaldi
    10 Jan 2018 · FAtt

17. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
    Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres
    30 Nov 2017 · FAtt

18. Categorical Reparameterization with Gumbel-Softmax
    Eric Jang, S. Gu, Ben Poole
    03 Nov 2016 · BDL

19. TensorFlow: A system for large-scale machine learning
    Martín Abadi, P. Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, ..., Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, Xiaoqiang Zheng
    27 May 2016 · GNN, AI4CE

20. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
    Sergey Ioffe, Christian Szegedy
    11 Feb 2015 · OOD