ResearchTrend.AI · arXiv:2404.11338 · Cited By
LLMs for Cyber Security: New Opportunities

17 April 2024
D. Divakaran, Sai Teja Peddinti
arXiv (abs) · PDF · HTML

Papers citing "LLMs for Cyber Security: New Opportunities"

29 of 29 citing papers shown
Large Language Models for Cyber Security: A Systematic Literature Review
HanXiang Xu, Shenao Wang, Ningke Li, Kaidi Wang, Yanjie Zhao, Kai Chen, Ting Yu, Yang Liu, Haoyu Wang
111 · 41 · 0 · 08 May 2024

AutoCodeRover: Autonomous Program Improvement
Yuntong Zhang, Haifeng Ruan, Zhiyu Fan, Abhik Roychoudhury
95 · 68 · 0 · 08 Apr 2024

LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement
Nicholas Lee, Thanakul Wattanawong, Sehoon Kim, K. Mangalam, Sheng Shen, Gopala Anumanchipalli, Michael W. Mahoney, Kurt Keutzer, A. Gholami
95 · 52 · 0 · 22 Mar 2024

KnowPhish: Large Language Models Meet Multimodal Knowledge Graphs for Enhancing Reference-Based Phishing Detection
Yuexin Li, Chengyu Huang, Shumin Deng, Mei Lin Lock, Tri Cao, Nay Oo, Hoon Wei Lim, Bryan Hooi
94 · 20 · 0 · 04 Mar 2024

ChatSpamDetector: Leveraging Large Language Models for Effective Phishing Email Detection
Takashi Koide, Naoki Fukushi, Hiroki Nakano, Daiki Chiba
81 · 32 · 0 · 28 Feb 2024

Exploring the Adversarial Capabilities of Large Language Models
Lukas Struppek, Minh Hieu Le, Dominik Hintersdorf, Kristian Kersting
ELM, AAML
52 · 4 · 0 · 14 Feb 2024

Scaling Up LLM Reviews for Google Ads Content Moderation
Wei Qiao, Tushar Dogra, Otilia Stretcu, Yu-Han Lyu, Tiantian Fang, ..., Chih-Chun Chia, Ariel Fuxman, Fangzhou Wang, Ranjay Krishna, Mehmet Tek
62 · 13 · 0 · 07 Feb 2024

Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations
Hakan Inan, Kartikeya Upasani, Jianfeng Chi, Rashi Rungta, Krithika Iyer, ..., Michael Tontchev, Qing Hu, Brian Fuller, Davide Testuggine, Madian Khabsa
AI4MH
161 · 459 · 0 · 07 Dec 2023

HuntGPT: Integrating Machine Learning-Based Anomaly Detection and Explainable AI with Large Language Models (LLMs)
Tarek Ali, Panos Kostakos
67 · 41 · 0 · 27 Sep 2023

SurrogatePrompt: Bypassing the Safety Filter of Text-To-Image Models via Substitution
Zhongjie Ba, Jieming Zhong, Jiachen Lei, Pengyu Cheng, Qinglong Wang, Zhan Qin, Peng Kuang, Kui Ren
65 · 22 · 0 · 25 Sep 2023

Low-Quality Training Data Only? A Robust Framework for Detecting Encrypted Malicious Network Traffic
Yuqi Qing, Qilei Yin, Xinhao Deng, Yihao Chen, Zhuotao Liu, Kun Sun, Ke Xu, Jia Zhang, Qi Li
AAML
97 · 17 · 0 · 09 Sep 2023

Certifying LLM Safety against Adversarial Prompting
Aounon Kumar, Chirag Agarwal, Suraj Srinivas, Aaron Jiaxun Li, Soheil Feizi, Himabindu Lakkaraju
AAML
99 · 193 · 0 · 06 Sep 2023

Attacking Logo-Based Phishing Website Detectors with Adversarial Perturbations
Jehyun Lee, Zhe Xin, Melanie Ng Pei See, Kanav Sabharwal, Giovanni Apruzzese, D. Divakaran
AAML
57 · 8 · 0 · 18 Aug 2023

You Only Prompt Once: On the Capabilities of Prompt Learning on Large Language Models to Tackle Toxic Content
Xinlei He, Savvas Zannettou, Yun Shen, Yang Zhang
CLL
41 · 42 · 0 · 10 Aug 2023

Fuzz4All: Universal Fuzzing with Large Language Models
Chun Xia, Matteo Paltenghi, Jia Le Tian, Michael Pradel, Lingming Zhang
ELM
67 · 119 · 0 · 09 Aug 2023

Guarding the Guardians: Automated Analysis of Online Child Sexual Abuse
J. Puentes, Angela Castillo, Wilmar Osejo, Yuly Calderón, Viviana Quintero, L. Saldarriaga, D. Agudelo, Pablo Arbelaez
53 · 2 · 0 · 07 Aug 2023

MasterKey: Automated Jailbreak Across Multiple Large Language Model Chatbots
Gelei Deng, Yi Liu, Yuekang Li, Kailong Wang, Ying Zhang, Zefeng Li, Haoyu Wang, Tianwei Zhang, Yang Liu
SILM
83 · 133 · 0 · 16 Jul 2023

Prompt Injection Attack against LLM-integrated Applications
Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, Zihao Wang, ..., Tianwei Zhang, Yepang Liu, Haoyu Wang, Yanhong Zheng, Yang Liu
SILM
108 · 361 · 0 · 08 Jun 2023

LLM-powered Data Augmentation for Enhanced Cross-lingual Performance
Chenxi Whitehouse, Monojit Choudhury, Alham Fikri Aji
SyDa, LRM
78 · 74 · 0 · 23 May 2023

Not What You've Signed Up For: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection
Kai Greshake, Sahar Abdelnabi, Shailesh Mishra, C. Endres, Thorsten Holz, Mario Fritz
SILM
131 · 497 · 0 · 23 Feb 2023

Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks
Daniel Kang, Xuechen Li, Ion Stoica, Carlos Guestrin, Matei A. Zaharia, Tatsunori Hashimoto
AAML
94 · 253 · 0 · 11 Feb 2023

A Watermark for Large Language Models
John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, Tom Goldstein
VLM, WaLM
100 · 504 · 0 · 24 Jan 2023

Tuning Language Models as Training Data Generators for Augmentation-Enhanced Few-Shot Learning
Yu Meng, Martin Michalski, Jiaxin Huang, Yu Zhang, Tarek Abdelzaher, Jiawei Han
VLM
106 · 49 · 0 · 06 Nov 2022

Understanding HTML with Large Language Models
Izzeddin Gur, Ofir Nachum, Yingjie Miao, Mustafa Safdari, Austin Huang, Aakanksha Chowdhery, Sharan Narang, Noah Fiedel, Aleksandra Faust
AI4CE
193 · 71 · 0 · 08 Oct 2022

Lost at C: A User Study on the Security Implications of Large Language Model Code Assistants
Gustavo Sandoval, Hammond Pearce, Teo Nys, Ramesh Karri, S. Garg, Brendan Dolan-Gavitt
ELM
67 · 96 · 0 · 20 Aug 2022

Just Fine-tune Twice: Selective Differential Privacy for Large Language Models
Weiyan Shi, Ryan Shea, Si-An Chen, Chiyuan Zhang, R. Jia, Zhou Yu
AAML
69 · 41 · 0 · 15 Apr 2022

Is GitHub's Copilot as Bad as Humans at Introducing Vulnerabilities in Code?
Owura Asare, M. Nagappan, Nirmal Asokan
77 · 112 · 0 · 10 Apr 2022

Examining Zero-Shot Vulnerability Repair with Large Language Models
Hammond Pearce, Benjamin Tan, Baleegh Ahmad, Ramesh Karri, Brendan Dolan-Gavitt
AAML, ELM
69 · 208 · 0 · 03 Dec 2021

GEE: A Gradient-based Explainable Variational Autoencoder for Network Anomaly Detection
Q. Nguyen, Kar Wai Lim, D. Divakaran, K. H. Low, M. Chan
DRL
47 · 137 · 0 · 15 Mar 2019