A Rusty Link in the AI Supply Chain: Detecting Evil Configurations in Model Repositories

2 May 2025
Ziqi Ding, Qian Fu, Junchen Ding, Gelei Deng, Yi Liu, Yuekang Li
Abstract

Recent advancements in large language models (LLMs) have spurred the development of diverse AI applications, from code generation and video editing to text generation. However, AI supply chains such as Hugging Face, which host pretrained models and their associated configuration files contributed by the public, face significant security challenges. In particular, configuration files, originally intended to set up models by specifying parameters and initial settings, can be exploited to execute unauthorized code, yet research has largely overlooked their security compared to that of the models themselves. In this work, we present the first comprehensive study of malicious configurations on Hugging Face, identifying three attack scenarios (file, website, and repository operations) that expose inherent risks. To address these threats, we introduce CONFIGSCAN, an LLM-based tool that analyzes configuration files in the context of their associated runtime code and critical libraries, effectively detecting suspicious elements with a low false positive rate and high accuracy. Our extensive evaluation uncovers thousands of suspicious repositories and configuration files, underscoring the urgent need for enhanced security validation in AI model hosting platforms.
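To make the attack surface concrete, the sketch below shows one way a repository's config.json can steer model loading toward attacker-controlled code, together with a toy screening heuristic. This is an illustration, not the paper's CONFIGSCAN: the "auto_map" mechanism and the trust_remote_code flag are real Hugging Face features, but the repository contents, the module names, and the allowlist here are hypothetical.

# Hypothetical illustration of the attack surface described above: a model
# repository's config.json can point AutoModel loading at custom Python code
# shipped in the same repo, which executes on the victim's machine when the
# model is loaded with trust_remote_code=True. "evil_model.py" is made up.

# A config.json as it might appear in a malicious repository. The "auto_map"
# keys are a real Hugging Face mechanism for custom model classes.
malicious_config = {
    "architectures": ["BertForMaskedLM"],
    "model_type": "bert",
    "auto_map": {
        # Resolving AutoModel for this repo imports evil_model.py from the
        # repository itself; any top-level code in that file runs.
        "AutoConfig": "evil_model.EvilConfig",
        "AutoModel": "evil_model.EvilModel",
    },
}

# A minimal, illustrative screening heuristic (not CONFIGSCAN): flag configs
# whose auto_map entries reference code modules outside a vetted allowlist.
TRUSTED_MODULES = {"modeling_bert", "modeling_gpt2"}  # hypothetical allowlist

def flag_suspicious_auto_map(config: dict) -> list[str]:
    findings = []
    for cls, target in config.get("auto_map", {}).items():
        module = target.split(".")[0]
        if module not in TRUSTED_MODULES:
            findings.append(f"{cls} -> {target}: untrusted custom code module")
    return findings

if __name__ == "__main__":
    for finding in flag_suspicious_auto_map(malicious_config):
        print("suspicious:", finding)

A fixed allowlist like this is only a toy approximation; per the abstract, CONFIGSCAN instead uses an LLM to analyze configuration files in the context of their associated runtime code and critical libraries.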

@article{ding2025_2505.01067,
  title={A Rusty Link in the AI Supply Chain: Detecting Evil Configurations in Model Repositories},
  author={Ziqi Ding and Qian Fu and Junchen Ding and Gelei Deng and Yi Liu and Yuekang Li},
  journal={arXiv preprint arXiv:2505.01067},
  year={2025}
}