Prompting Large Language Models for Training-Free Non-Intrusive Load Monitoring

Non-Intrusive Load Monitoring (NILM) aims to disaggregate total household electricity consumption into individual appliance usage, enabling more effective energy management. While deep learning has advanced NILM, it remains limited by its dependence on labeled data, restricted generalization, and lack of interpretability. In this paper, we introduce the first prompt-based NILM framework that leverages Large Language Models (LLMs) with in-context learning. We design and evaluate prompt strategies that integrate appliance features, timestamps, contextual information, and representative time-series examples, using the REDD dataset. With optimized prompts, LLMs achieve competitive state detection accuracy, reaching an average F1-score of 0.676 on unseen households, and demonstrate robust generalization without the need for fine-tuning. LLMs also enhance interpretability by providing clear, human-readable explanations for their predictions. Our results show that LLMs can reduce data requirements, improve adaptability, and provide transparent energy disaggregation in NILM applications.
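To make the prompt design concrete, below is a minimal sketch of how such an in-context prompt might be assembled: appliance features, timestamps, and a few labeled time-series examples combined into a single query for a chat LLM. The helper name `build_nilm_prompt` and all field names are hypothetical illustrations, not the paper's actual templates.

```python
# Minimal sketch (assumed structure, not the authors' exact prompt template):
# build an in-context prompt for appliance ON/OFF state detection
# from aggregate household power readings.
from textwrap import dedent


def build_nilm_prompt(appliance, timestamps, readings, examples):
    """Assemble a prompt combining appliance features, timestamps,
    and labeled example series for in-context learning."""
    header = dedent(f"""\
        You are an expert in non-intrusive load monitoring (NILM).
        Task: given aggregate household power readings (watts), decide
        whether the {appliance['name']} is ON or OFF at each timestep.
        Appliance features: typical draw {appliance['typical_watts']} W,
        usual operation: {appliance['usual_hours']}.
        Briefly explain your reasoning, then answer ON or OFF per timestep.
        """)
    # Few-shot demonstrations: labeled readings -> appliance states.
    shots = "\n".join(
        f"Example readings: {ex['readings']} -> states: {ex['states']}"
        for ex in examples
    )
    # The query series, one timestamped reading per line.
    series = "\n".join(f"{t}: {w} W" for t, w in zip(timestamps, readings))
    return f"{header}\n{shots}\n\nQuery readings:\n{series}\nStates:"


if __name__ == "__main__":
    fridge = {"name": "refrigerator", "typical_watts": 120,
              "usual_hours": "all day (cyclic)"}
    demo = [{"readings": [450, 460, 330, 325],
             "states": ["ON", "ON", "OFF", "OFF"]}]
    prompt = build_nilm_prompt(
        fridge,
        ["2011-04-18 13:00", "2011-04-18 13:01"],
        [510, 505],
        demo,
    )
    print(prompt)  # send to any chat LLM endpoint; no fine-tuning required
```

Because the prompt carries all task knowledge, swapping the appliance dictionary or the demonstration examples adapts the same pipeline to a new household without retraining, which is the training-free property the abstract highlights.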
@article{xue2025_2505.06330,
  title={Prompting Large Language Models for Training-Free Non-Intrusive Load Monitoring},
  author={Junyu Xue and Xudong Wang and Xiaoling He and Shicheng Liu and Yi Wang and Guoming Tang},
  journal={arXiv preprint arXiv:2505.06330},
  year={2025}
}