
Risks of Practicing Large Language Models in Smart Grid: Threat Modeling and Validation
arXiv:2405.06237 · v3 (latest) · 10 May 2024
Jiangnan Li, Yingyuan Yang, Jinyuan Stella Sun

Papers citing "Risks of Practicing Large Language Models in Smart Grid: Threat Modeling and Validation" (20 papers)

A Comprehensive Survey on the Security of Smart Grid: Challenges, Mitigations, and Future Research Opportunities
Arastoo Zibaeirad, Farnoosh Koleini, Shengping Bi, Tao Hou, Tao Wang
10 Jul 2024

Jailbreak Attacks and Defenses Against Large Language Models: A Survey
Sibo Yi, Yule Liu, Zhen Sun, Tianshuo Cong, Xinlei He, Jiaxing Song, Ke Xu, Qi Li
05 Jul 2024

Large Language Models for Power Scheduling: A User-Centric Approach
T. Mongaillard, S. Lasaulce, Othman Hicheur, Chao Zhang, Lina Bariah, V. Varma, Hang Zou, Qiyang Zhao, Mérouane Debbah
29 Jun 2024

Applying Fine-Tuned LLMs for Reducing Data Needs in Load Profile Analysis
Yi Hu, Hyeonjin Kim, Kai Ye, Ning Lu
02 Jun 2024

Prompt Stealing Attacks Against Large Language Models
Zeyang Sha, Yang Zhang
20 Feb 2024

Large Foundation Models for Power Systems
Chenghao Huang, Siyang Li, Ruohong Liu, Hao Wang, Yize Chen
12 Dec 2023

Applying Large Language Models to Power Systems: Potential Security Threats
Jiaqi Ruan, Gaoqi Liang, Huan Zhao, Guolong Liu, Xianzhuo Sun, Jing Qiu, Zhao Xu, Fushuan Wen, Z. Dong
22 Nov 2023

Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks
Erfan Shayegani, Md Abdullah Al Mamun, Yu Fu, Pedram Zaree, Yue Dong, Nael B. Abu-Ghazaleh
16 Oct 2023

Prompt Injection attack against LLM-integrated Applications
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Zihao Wang, ..., Tianwei Zhang, Yepang Liu, Haoyu Wang, Yanhong Zheng, Yang Liu
08 Jun 2023

Beyond the Safeguards: Exploring the Security Risks of ChatGPT
Erik Derner, Kristina Batistic
13 May 2023

Multi-step Jailbreaking Privacy Attacks on ChatGPT
Haoran Li, Dadi Guo, Wei Fan, Mingshi Xu, Jie Huang, Fanpu Meng, Yangqiu Song
11 Apr 2023

A Comprehensive Capability Analysis of GPT-3 and GPT-3.5 Series Models
Junjie Ye, Xuanting Chen, Nuo Xu, Can Zu, Zekai Shao, ..., Jie Zhou, Siming Chen, Tao Gui, Qi Zhang, Xuanjing Huang
18 Mar 2023

GPT-4 Technical Report
OpenAI, Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, ..., Shengjia Zhao, Tianhao Zheng, Juntang Zhuang, William Zhuk, Barret Zoph
15 Mar 2023

Lost at C: A User Study on the Security Implications of Large Language Model Code Assistants
Gustavo Sandoval, Hammond Pearce, Teo Nys, Ramesh Karri, S. Garg, Brendan Dolan-Gavitt
20 Aug 2022

Towards Adversarial-Resilient Deep Neural Networks for False Data Injection Attack Detection in Power Grids
Jiangnan Li, Yingyuan Yang, Jinyuan Stella Sun, K. Tomsovic, Hairong Qi
17 Feb 2021

SearchFromFree: Adversarial Measurements for Machine Learning-based Energy Theft Detection
Jiangnan Li, Yingyuan Yang, Jinyuan Stella Sun
02 Jun 2020

ConAML: Constrained Adversarial Machine Learning for Cyber-Physical Systems
Jiangnan Li, Yingyuan Yang, Jinyuan Stella Sun, K. Tomsovic, Jin Young Lee
12 Mar 2020

Dynamic Detection of False Data Injection Attack in Smart Grid using Deep Learning
Xiangyu Niu, Jinyuan Stella Sun
03 Aug 2018

Machine Learning Methods for Attack Detection in the Smart Grid
Mete Ozay, I. Esnaola, Yarman Vural, Sanjeev R. Kulkarni, H. Vincent Poor
22 Mar 2015

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
21 Dec 2013