ResearchTrend.AI

Uncovering the Limits of Machine Learning for Automatic Vulnerability Detection

28 June 2023 · Niklas Risse, Marcel Böhme
  AAML

Papers citing "Uncovering the Limits of Machine Learning for Automatic Vulnerability Detection"

16 / 16 papers shown
Let the Trial Begin: A Mock-Court Approach to Vulnerability Detection using LLM-Based Agents
  Ratnadira Widyasari, Martin Weyssow, Ivana Clairine Irsan, Han Wei Ang, Frank Liauw, Eng Lieh Ouh, Lwin Khin Shar, Hong Jin Kang, David Lo
  LLMAG · 16 May 2025
Poster: Machine Learning for Vulnerability Detection as Target Oracle in Automated Fuzz Driver Generation
  Gianpietro Castiglione, Marcello Maugeri, G. Bella
  02 May 2025
Trace Gadgets: Minimizing Code Context for Machine Learning-Based Vulnerability Prediction
  Felix Mächtle, Nils Loose, Tim Schulz, Florian Sieck, Jan-Niclas Serr, Ralf Möller, T. Eisenbarth
  18 Apr 2025
Semantic-Preserving Transformations as Mutation Operators: A Study on Their Effectiveness in Defect Detection
  Max Hort, Linas Vidziunas, Leon Moonen
  30 Mar 2025
Vulnerability Detection: From Formal Verification to Large Language Models and Hybrid Approaches: A Comprehensive Overview
  Norbert Tihanyi, Tamás Bisztray, M. Ferrag, Bilel Cherif, Richard A. Dubniczky, Ridhi Jain, Lucas C. Cordeiro
  13 Mar 2025
Benchmarking LLMs and LLM-based Agents in Practical Vulnerability Detection for Code Repositories
  Alperen Yildiz, Sin G. Teo, Yiling Lou, Yebo Feng, Chong Wang, Dinil M. Divakaran
  05 Mar 2025
LessLeak-Bench: A First Investigation of Data Leakage in LLMs Across 83 Software Engineering Benchmarks
  Xin Zhou, Martin Weyssow, Ratnadira Widyasari, Ting Zhang, Junda He, Yunbo Lyu, Jianming Chang, Beiqi Zhang, Dan Huang, David Lo
  PILM · 10 Feb 2025
How Should We Build A Benchmark? Revisiting 274 Code-Related Benchmarks For LLMs
  Jialun Cao, Yuk-Kit Chan, Zixuan Ling, Wenxuan Wang, Shuqing Li, ..., Pinjia He, Shuai Wang, Zibin Zheng, Michael R. Lyu, Shing-Chi Cheung
  ALM · 18 Jan 2025
Top Score on the Wrong Exam: On Benchmarking in Machine Learning for Vulnerability Detection
  Niklas Risse, Marcel Böhme
  23 Aug 2024
An Empirical Study on Capability of Large Language Models in Understanding Code Semantics
  Thu-Trang Nguyen, Thanh Trong Vu, H. Vo, Son Nguyen
  ELM · 04 Jul 2024
Vulnerability Detection with Code Language Models: How Far Are We?
  Yangruibo Ding, Yanjun Fu, Omniyyah Ibrahim, Chawin Sitawarin, Xinyun Chen, Basel Alomair, David Wagner, Baishakhi Ray, Yizheng Chen
  AAML · 27 Mar 2024
Evaluating Program Repair with Semantic-Preserving Transformations: A Naturalness Assessment
  Thanh Le-Cong, Dat Nguyen, Bach Le, Toby Murray
  19 Feb 2024
LLMs Cannot Reliably Identify and Reason About Security Vulnerabilities (Yet?): A Comprehensive Evaluation, Framework, and Benchmarks
  Saad Ullah, Mingji Han, Saurabh Pujar, Hammond Pearce, Ayse K. Coskun, Gianluca Stringhini
  ELM, LRM · 19 Dec 2023
VulBERTa: Simplified Source Code Pre-Training for Vulnerability Detection
  Hazim Hanif, S. Maffeis
  25 May 2022
CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation
  Shuai Lu, Daya Guo, Shuo Ren, Junjie Huang, Alexey Svyatkovskiy, ..., Nan Duan, Neel Sundaresan, Shao Kun Deng, Shengyu Fu, Shujie Liu
  ELM · 09 Feb 2021
Semantic Robustness of Models of Source Code
  Goutham Ramakrishnan, Jordan Henkel, Zi Wang, Aws Albarghouthi, S. Jha, Thomas W. Reps
  SILM, AAML · 07 Feb 2020