Towards Reasonable Concept Bottleneck Models
In this paper, we propose Concept REAsoning Models (CREAM), a novel family of Concept Bottleneck Models (CBMs) that: (i) explicitly encodes concept-concept (C→C) and concept-task (C→Y) relationships to enforce desired model reasoning; and (ii) uses a regularized side-channel to achieve competitive task performance while keeping concept importance high. Specifically, CREAM architecturally embeds (bi)directed concept-concept and concept-to-task relationships specified by a human expert, while severing undesired information flows (e.g., to handle mutually exclusive concepts). Moreover, CREAM integrates a black-box side-channel that is regularized to encourage task predictions to be grounded in the relevant concepts, so that the side-channel is used only when necessary to enhance performance. Our experiments show that: (i) CREAM mainly relies on concepts while achieving task performance on par with black-box models; and (ii) the embedded C→C and C→Y relationships ease model interventions and mitigate concept leakage.
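To make the architectural idea concrete, the following is a minimal, hypothetical PyTorch sketch of a CBM with an expert-specified concept graph and a regularized side-channel. All names (ConceptGraphCBM, the cc_mask/cy_mask adjacency buffers, the side-channel penalty in loss_fn) are illustrative assumptions, not the authors' implementation; the sketch only shows how allowed C→C and C→Y edges can be enforced by masking weights and how a side-channel's contribution can be penalized.

```python
import torch
import torch.nn as nn


class ConceptGraphCBM(nn.Module):
    """Illustrative CBM with masked concept-concept / concept-task edges and a side-channel."""

    def __init__(self, in_dim, n_concepts, n_classes, cc_mask, cy_mask, side_dim=16):
        super().__init__()
        # x -> concept logits
        self.concept_net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, n_concepts)
        )
        # concept -> concept refinement, restricted to expert-allowed edges
        self.cc_weight = nn.Parameter(torch.zeros(n_concepts, n_concepts))
        self.register_buffer("cc_mask", cc_mask)  # 1 where concept j may influence concept i
        # concept -> task head, restricted to expert-allowed edges
        self.cy_weight = nn.Parameter(torch.zeros(n_classes, n_concepts))
        self.register_buffer("cy_mask", cy_mask)  # 1 where concept j may influence class k
        self.cy_bias = nn.Parameter(torch.zeros(n_classes))
        # black-box side-channel whose contribution is penalized during training
        self.side = nn.Sequential(
            nn.Linear(in_dim, side_dim), nn.ReLU(), nn.Linear(side_dim, n_classes)
        )

    def forward(self, x):
        c_logits = self.concept_net(x)
        c = torch.sigmoid(c_logits)
        # propagate only along allowed concept-concept edges; severed edges contribute nothing
        c_refined = torch.sigmoid(c_logits + c @ (self.cc_weight * self.cc_mask).T)
        y_concepts = c_refined @ (self.cy_weight * self.cy_mask).T + self.cy_bias
        y_side = self.side(x)
        return c_refined, y_concepts, y_side


def loss_fn(c_hat, c_true, y_concepts, y_side, y_true, lam=1.0, gamma=0.1):
    # Task loss on the combined prediction, concept supervision, and a penalty
    # discouraging reliance on the side-channel unless it is needed.
    task = nn.functional.cross_entropy(y_concepts + y_side, y_true)
    concept = nn.functional.binary_cross_entropy(c_hat, c_true)
    side_penalty = y_side.pow(2).mean()
    return task + lam * concept + gamma * side_penalty
```

In this sketch, setting an entry of cc_mask or cy_mask to zero severs that information flow entirely, while the gamma-weighted penalty keeps task predictions grounded in the concept pathway unless the side-channel is genuinely needed.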
@article{kalampalikis2025_2506.05014,
  title   = {Towards Reasonable Concept Bottleneck Models},
  author  = {Nektarios Kalampalikis and Kavya Gupta and Georgi Vitanov and Isabel Valera},
  journal = {arXiv preprint arXiv:2506.05014},
  year    = {2025}
}