Assessing Evaluation Metrics for Neural Test Oracle Generation
arXiv:2310.07856 · 11 October 2023
Jiho Shin, Hadi Hemmati, Moshi Wei, Song Wang
Papers citing "Assessing Evaluation Metrics for Neural Test Oracle Generation" (9 papers)
1. Benchmarking Large Language Models in Retrieval-Augmented Generation. Jiawei Chen, Hongyu Lin, Xianpei Han, Le Sun. 04 Sep 2023.
2. Out of the BLEU: how should we assess quality of the Code Generation models? Mikhail Evtikhiev, Egor Bogomolov, Yaroslav Sokolov, T. Bryksin. 05 Aug 2022.
3. ReAssert: Deep Learning for Assert Generation. Robert White, J. Krinke. 19 Nov 2020.
4. CodeBLEU: a Method for Automatic Evaluation of Code Synthesis. Shuo Ren, Daya Guo, Shuai Lu, Long Zhou, Shujie Liu, Duyu Tang, Neel Sundaresan, M. Zhou, Ambrosio Blanco, Shuai Ma. 22 Sep 2020.
5. Generating Accurate Assert Statements for Unit Test Cases using Pretrained Transformers. Michele Tufano, Dawn Drain, Alexey Svyatkovskiy, Neel Sundaresan. 11 Sep 2020.
6. Unit Test Case Generation with Transformers and Focal Context. Michele Tufano, Dawn Drain, Alexey Svyatkovskiy, Shao Kun Deng, Neel Sundaresan. 11 Sep 2020.
7. CodeBERT: A Pre-Trained Model for Programming and Natural Languages. Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, ..., Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, Ming Zhou. 19 Feb 2020.
8. Does BLEU Score Work for Code Migration? Ngoc M. Tran, H. Tran, S. T. Nguyen, H. Nguyen, Tien N Nguyen. 12 Jun 2019.
9. Human vs Automatic Metrics: on the Importance of Correlation Design. Anastasia Shimorina. 29 May 2018.