What's Pulling the Strings? Evaluating Integrity and Attribution in AI Training and Inference through Concept Shift

The growing adoption of artificial intelligence (AI) has amplified concerns about trustworthiness, including integrity, privacy, robustness, and bias. To assess and attribute these threats, we propose ConceptLens, a generic framework that leverages pre-trained multimodal models to identify the root causes of integrity threats by analyzing Concept Shift in probing samples. ConceptLens demonstrates strong detection performance against vanilla data poisoning attacks and uncovers vulnerabilities to bias injection, such as the generation of covert advertisements through malicious concept shifts. It identifies privacy risks in unaltered but high-risk samples, filters them before training, and provides insights into model weaknesses arising from incomplete or imbalanced training data. Additionally, at the model level, it attributes the concepts on which the target model is overly dependent, identifies misleading concepts, and explains how disrupting key concepts degrades the model's behavior. Furthermore, it uncovers sociological biases in generated content, revealing disparities across sociological contexts. Strikingly, ConceptLens shows how safe training and inference data can be unintentionally and easily exploited, potentially undermining safety alignment. Our study provides actionable insights to foster trust in AI systems, thereby accelerating adoption and driving greater innovation.
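To make the probing idea concrete, the sketch below illustrates one plausible way to measure Concept Shift, assuming CLIP (via Hugging Face transformers) as the pre-trained multimodal model. It is our own minimal illustration, not the paper's released implementation: the function names (concept_scores, concept_shift), the checkpoint, and the example concept prompts are all hypothetical.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Assumed checkpoint; any pre-trained multimodal encoder could stand in here.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def concept_scores(images, concepts):
    """Scaled cosine similarities between images and concept text prompts."""
    inputs = processor(text=concepts, images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # logits_per_image has shape (num_images, num_concepts).
    return out.logits_per_image

def concept_shift(reference_images, probe_images, concepts):
    """Per-concept mean score difference between probe and reference sets.

    Large positive entries flag concepts the probe set has drifted toward,
    e.g. a poisoned batch over-expressing an injected concept.
    """
    ref = concept_scores(reference_images, concepts).mean(dim=0)
    probe = concept_scores(probe_images, concepts).mean(dim=0)
    return probe - ref

# Example usage (hypothetical file names and concept list):
# concepts = ["a dog", "a cat", "an advertisement logo"]
# ref = [Image.open(p) for p in ["clean1.png", "clean2.png"]]
# probe = [Image.open(p) for p in ["suspect1.png", "suspect2.png"]]
# print(concept_shift(ref, probe, concepts))

Under these assumptions, a suspect batch that scores unusually high on a concept absent from the reference distribution (e.g., "an advertisement logo") would be flagged and could be filtered before training, matching the detection and filtering roles described above.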
@article{chang2025_2504.21042,
  title={What's Pulling the Strings? Evaluating Integrity and Attribution in AI Training and Inference through Concept Shift},
  author={Jiamin Chang and Haoyang Li and Hammond Pearce and Ruoxi Sun and Bo Li and Minhui Xue},
  journal={arXiv preprint arXiv:2504.21042},
  year={2025}
}