
Avoiding Leakage Poisoning: Concept Interventions Under Distribution Shifts

Abstract

In this paper, we investigate how concept-based models (CMs) respond to out-of-distribution (OOD) inputs. CMs are interpretable neural architectures that first predict a set of high-level concepts (e.g., stripes, black) and then predict a task label from those concepts. In particular, we study the impact of concept interventions (i.e., operations where a human expert corrects a CM's mispredicted concepts at test time) on CMs' task predictions when inputs are OOD. Our analysis reveals a weakness in current state-of-the-art CMs, which we term leakage poisoning, that prevents them from properly improving their accuracy when intervened on for OOD inputs. To address this, we introduce MixCEM, a new CM that learns to dynamically exploit leaked information missing from its concepts only when this information is in-distribution. Our results across tasks with and without complete sets of concept annotations demonstrate that MixCEMs outperform strong baselines by significantly improving their accuracy for both in-distribution and OOD samples in the presence and absence of concept interventions.
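To make the setup described above concrete, the following is a minimal sketch of a generic concept bottleneck model with a test-time concept intervention, written in PyTorch. It illustrates only the general CM pipeline (inputs to concepts to task label, with a human overwriting selected concepts), not the paper's MixCEM architecture; the class name, layer sizes, and the intervention_mask/true_concepts arguments are hypothetical choices made for this example.

import torch
import torch.nn as nn

N_FEATURES, N_CONCEPTS, N_CLASSES = 64, 8, 4

class ConceptBottleneckModel(nn.Module):
    """Illustrative concept bottleneck model: x -> concepts -> task label."""

    def __init__(self):
        super().__init__()
        # Maps inputs to concept logits (e.g., "stripes", "black").
        self.concept_encoder = nn.Sequential(
            nn.Linear(N_FEATURES, 32), nn.ReLU(), nn.Linear(32, N_CONCEPTS)
        )
        # Predicts the task label from the concept vector alone.
        self.label_predictor = nn.Linear(N_CONCEPTS, N_CLASSES)

    def forward(self, x, intervention_mask=None, true_concepts=None):
        c_hat = torch.sigmoid(self.concept_encoder(x))
        if intervention_mask is not None:
            # Concept intervention: a human expert overwrites the selected
            # (mispredicted) concepts with their ground-truth values, and the
            # label is re-predicted from the corrected concept vector.
            c_hat = torch.where(intervention_mask.bool(), true_concepts, c_hat)
        return c_hat, self.label_predictor(c_hat)

# Example: intervene on the first concept of a single (random) input.
model = ConceptBottleneckModel()
x = torch.randn(1, N_FEATURES)
mask = torch.zeros(1, N_CONCEPTS); mask[0, 0] = 1.0    # which concepts to correct
truth = torch.zeros(1, N_CONCEPTS); truth[0, 0] = 1.0  # their ground-truth values
concepts, logits = model(x, intervention_mask=mask, true_concepts=truth)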

@article{zarlenga2025_2504.17921,
  title={Avoiding Leakage Poisoning: Concept Interventions Under Distribution Shifts},
  author={Mateo Espinosa Zarlenga and Gabriele Dominici and Pietro Barbiero and Zohreh Shams and Mateja Jamnik},
  journal={arXiv preprint arXiv:2504.17921},
  year={2025}
}