VulBinLLM: LLM-powered Vulnerability Detection for Stripped Binaries (arXiv:2505.22010)

28 May 2025
Nasir Hussain, Haohan Chen, Chanh Tran, Philip Huang, Zhuohao Li, Pravir Chugh, William Chen, Ashish Kundu, Yuan Tian

Papers citing "VulBinLLM: LLM-powered Vulnerability Detection for Stripped Binaries" (12 of 12 papers shown)

  • Multi-Agent Collaboration: Harnessing the Power of Intelligent LLM Agents
    Yashar Talebirad, Amirhossein Nadiri · LLMAG · 209 citations · 05 Jun 2023
  • NeuDep: Neural Binary Memory Dependence Analysis
    Kexin Pei, Dongdong She, Michael Wang, Scott Geng, Zhou Xuan, Yaniv David, Junfeng Yang, Suman Jana, Baishakhi Ray · 6 citations · 04 Oct 2022
  • VulBERTa: Simplified Source Code Pre-Training for Vulnerability Detection
    Hazim Hanif, S. Maffeis · 100 citations · 25 May 2022
  • Text and Code Embeddings by Contrastive Pre-Training
    Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, ..., Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, Lilian Weng · SSL, AI4TS · 432 citations · 24 Jan 2022
  • CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation
    Yue Wang, Weishi Wang, Shafiq Joty, Guosheng Lin · 1,532 citations · 02 Sep 2021
  • Unified Pre-training for Program Understanding and Generation
    Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang · 760 citations · 10 Mar 2021
  • Reverse engineering learned optimizers reveals known and novel mechanisms
    Niru Maheswaranathan, David Sussillo, Luke Metz, Ruoxi Sun, Jascha Narain Sohl-Dickstein · 22 citations · 04 Nov 2020
  • Language Models are Few-Shot Learners
    Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei · BDL · 41,106 citations · 28 May 2020
  • CodeBERT: A Pre-Trained Model for Programming and Natural Languages
    Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, ..., Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, Ming Zhou · 2,570 citations · 19 Feb 2020
  • RoBERTa: A Robustly Optimized BERT Pretraining Approach
    Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov · AIMat · 24,160 citations · 26 Jul 2019
  • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
    Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova · VLM, SSL, SSeg · 93,936 citations · 11 Oct 2018
  • Attention Is All You Need
    Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin · 3DV · 129,831 citations · 12 Jun 2017