The Dark Side of Digital Twins: Adversarial Attacks on AI-Driven Water Forecasting

Digital twins (DTs) are improving water distribution systems by combining real-time data, analytics, and predictive models to optimize operations. This paper presents a DT platform designed for a Spanish water supply network that uses Long Short-Term Memory (LSTM) networks to forecast water consumption. However, such machine learning models are vulnerable to adversarial attacks, such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). These attacks use the model's gradients to inject subtle distortions into its inputs, degrading forecasting accuracy. To further exploit these vulnerabilities, we introduce a Learning Automata (LA) and Random LA-based approach that dynamically adjusts the perturbations, making the attacks harder to detect. Experimental results show that these attacks significantly degrade prediction reliability, raising the Mean Absolute Percentage Error (MAPE) from 26% to over 35%. Moreover, the adaptive attack strategies amplify this effect, highlighting the cybersecurity risks facing AI-driven DTs. These findings underscore the urgent need for robust defenses, including adversarial training, anomaly detection, and secure data pipelines.
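For context, the attacks and the error metric named above have standard formulations; the notation below is the conventional one, sketched here for reference rather than taken from the paper. FGSM perturbs an input x by a single signed-gradient step of size epsilon, PGD repeats that step with step size alpha while projecting back onto the epsilon-ball around the original input, and MAPE is the percentage error metric reported in the results:

\[
x_{\mathrm{adv}} = x + \epsilon \,\operatorname{sign}\!\big(\nabla_x \mathcal{L}(\theta, x, y)\big) \qquad \text{(FGSM)}
\]
\[
x^{(t+1)} = \Pi_{\mathcal{B}_\epsilon(x)}\!\Big(x^{(t)} + \alpha \,\operatorname{sign}\!\big(\nabla_x \mathcal{L}(\theta, x^{(t)}, y)\big)\Big) \qquad \text{(PGD)}
\]
\[
\mathrm{MAPE} = \frac{100\%}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|
\]

Here \(\mathcal{L}\) is the forecasting loss, \(\theta\) the LSTM weights, \(y_i\) the observed consumption, and \(\hat{y}_i\) the model's prediction.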
@article{homaei2025_2504.20295,
  title   = {The Dark Side of Digital Twins: Adversarial Attacks on AI-Driven Water Forecasting},
  author  = {Mohammadhossein Homaei and Victor Gonzalez Morales and Oscar Mogollon-Gutierrez and Andres Caro},
  journal = {arXiv preprint arXiv:2504.20295},
  year    = {2025}
}