If Concept Bottlenecks are the Question, are Foundation Models the Answer?

Concept Bottleneck Models (CBMs) are neural networks designed to combine high performance with ante-hoc interpretability. CBMs work by first mapping inputs (e.g., images) to high-level concepts (e.g., visible objects and their properties) and then using these concepts to solve a downstream task (e.g., tagging or scoring an image) in an interpretable manner. Their performance and interpretability, however, hinge on the quality of the concepts they learn. The go-to strategy for ensuring good-quality concepts is to leverage expert annotations, which are expensive to collect and seldom available in real-world applications. Researchers have recently addressed this issue by introducing "VLM-CBM" architectures, which replace manual annotations with weak supervision from vision-language foundation models. It is unclear, however, what impact this substitution has on the quality of the learned concepts. To answer this question, we put state-of-the-art VLM-CBMs to the test, empirically analyzing their learned concepts with a selection of meaningful metrics. Our results show that, depending on the task, VLM supervision can differ substantially from expert annotations, and that concept accuracy and concept quality are not strongly correlated. Our code is available at this https URL.
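To make the two-stage architecture concrete, here is a minimal sketch of a jointly trained CBM in PyTorch. All names (`ConceptBottleneckModel`, `cbm_loss`) are hypothetical illustrations, not the authors' implementation.

```python
# Minimal CBM sketch (hypothetical names; not the paper's implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptBottleneckModel(nn.Module):
    """Two-stage model: input features -> concept scores -> task prediction."""

    def __init__(self, feat_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        self.concept_head = nn.Linear(feat_dim, n_concepts)  # x -> concepts
        self.task_head = nn.Linear(n_concepts, n_classes)    # concepts -> label

    def forward(self, feats: torch.Tensor):
        c_logits = self.concept_head(feats)
        c = torch.sigmoid(c_logits)      # concept activations in [0, 1]
        y_logits = self.task_head(c)     # the decision depends only on concepts
        return c_logits, y_logits

def cbm_loss(c_logits, y_logits, c_targets, y_targets, lam: float = 1.0):
    """Joint objective: task loss plus weighted concept-supervision loss.

    c_targets may come from expert annotations or, in VLM-CBMs,
    from pseudo-labels produced by a foundation model.
    """
    task = F.cross_entropy(y_logits, y_targets)
    concepts = F.binary_cross_entropy_with_logits(c_logits, c_targets)
    return task + lam * concepts
```

And a sketch of how weak concept supervision could be obtained from a vision-language model, assuming one text prompt per concept and using the Hugging Face `transformers` CLIP API; the thresholding scheme and the value of `tau` are illustrative choices, not details from the paper, and actual VLM-CBM pipelines differ in how they turn similarities into supervision.

```python
# CLIP-based concept pseudo-labels (illustrative, not the authors' pipeline).
import torch
from transformers import CLIPModel, CLIPProcessor

def clip_concept_labels(images, concept_prompts, tau: float = 0.5):
    """Return binary concept pseudo-labels, one per (image, concept) pair."""
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
    inputs = processor(text=concept_prompts, images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        sims = model(**inputs).logits_per_image  # (n_images, n_concepts)
    probs = torch.sigmoid(sims / 100.0)  # undo CLIP's logit scaling, squash to (0, 1)
    return (probs > tau).float()         # hard pseudo-labels; soft labels also work
```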
@article{debole2025_2504.19774,
  title   = {If Concept Bottlenecks are the Question, are Foundation Models the Answer?},
  author  = {Nicola Debole and Pietro Barbiero and Francesco Giannini and Andrea Passerini and Stefano Teso and Emanuele Marconato},
  journal = {arXiv preprint arXiv:2504.19774},
  year    = {2025}
}