Mitigating Adversarial Effects of False Data Injection Attacks in Power Grid

29 January 2023
Farhin Farhad Riya
Shahinul Hoque
Jinyuan Stella Sun
Jiangnan Li
Hairong Qi
    AAML
    AI4CE
Abstract

Deep Neural Networks (DNNs) have proven highly accurate at a variety of tasks in recent years, and their benefits have also been embraced in power grids to detect False Data Injection Attacks (FDIA) during critical tasks such as state estimation. However, the vulnerabilities of DNNs, together with the distinct infrastructure of cyber-physical systems (CPS), can enable attackers to bypass the detection mechanism. Moreover, the divergent nature of CPS limits the applicability of conventional defense mechanisms against FDIA. In this paper, we propose a DNN framework with an additional layer that uses randomization to mitigate adversarial effects by padding the inputs. The primary advantage of our method is that, when deployed to a DNN model, it has a trivial impact on the model's performance even with larger padding sizes. We demonstrate the effectiveness of the framework through simulations on the IEEE 14-bus, 30-bus, 118-bus, and 300-bus systems. Furthermore, to stress-test the framework, we select attack techniques that generate subtle adversarial examples capable of bypassing the detection mechanism effortlessly.
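The abstract does not spell out how the randomized padding layer works. A minimal sketch of the general idea is given below; the function name, the choice of a Gaussian padding distribution, and the random insertion offset are illustrative assumptions, not the authors' actual implementation:

```python
import numpy as np

def randomized_pad(x, pad_size, rng=None):
    """Pad a 1-D measurement vector with random values at a random offset.

    Because both the padded values and the split point are drawn fresh on
    every call, an attacker cannot predict the exact input the DNN will see,
    which is the intuition behind randomization-based defenses.
    """
    rng = np.random.default_rng() if rng is None else rng
    offset = rng.integers(0, pad_size + 1)       # random split of the padding
    left = rng.normal(size=offset)               # random values on the left
    right = rng.normal(size=pad_size - offset)   # random values on the right
    return np.concatenate([left, x, right])

# Example: pad a measurement vector for a hypothetical IEEE 14-bus setup
x = np.ones(14)
y = randomized_pad(x, pad_size=4)
assert y.shape == (18,)  # original length plus padding
```

In practice such a layer would sit in front of the detection model, with the model trained (or fine-tuned) on padded inputs so that accuracy on clean data is preserved, consistent with the paper's claim of trivial performance impact even at larger padding sizes.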

@article{riya2025_2301.12487,
  title={Mitigating Adversarial Effects of False Data Injection Attacks in Power Grid},
  author={Farhin Farhad Riya and Shahinul Hoque and Yingyuan Yang and Jiangnan Li and Jinyuan Stella Sun and Hairong Qi},
  journal={arXiv preprint arXiv:2301.12487},
  year={2025}
}