Pop Quiz! Do Pre-trained Code Models Possess Knowledge of Correct API Names?
arXiv:2309.07804
14 September 2023
Terry Yue Zhuo, Xiaoning Du, Zhenchang Xing, Jiamou Sun, Haowei Quan, Li Li, Liming Zhu

Papers citing "Pop Quiz! Do Pre-trained Code Models Possess Knowledge of Correct API Names?"

12 / 12 papers shown

Deep Learning Meets Software Engineering: A Survey on Pre-Trained Models of Source Code
Changan Niu, Chuanyi Li, Bin Luo, Vincent Ng
SyDa, VLM · 88 · 49 · 0 · 24 May 2022

On the Effectiveness of Pretrained Models for API Learning
M. Hadi, Imam Nur Bani Yusuf, Ferdian Thung, K. Luong, Lingxiao Jiang, Fatemeh H. Fard, David Lo
60 · 13 · 0 · 05 Apr 2022

Probing Pretrained Models of Source Code
Sergey Troshin, Nadezhda Chirkova
ELM · 63 · 38 · 0 · 16 Feb 2022

Quantifying Memorization Across Neural Language Models
Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramèr, Chiyuan Zhang
PILM · 89 · 603 · 0 · 15 Feb 2022

Unified Pre-training for Program Understanding and Generation
Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang
114 · 760 · 0 · 10 Mar 2021

X-FACTR: Multilingual Factual Knowledge Retrieval from Pretrained Language Models
Zhengbao Jiang, Antonios Anastasopoulos, Jun Araki, Haibo Ding, Graham Neubig
HILM, KELM · 55 · 143 · 0 · 13 Oct 2020

GraphCodeBERT: Pre-training Code Representations with Data Flow
Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, ..., Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, M. Zhou
130 · 1,111 · 0 · 17 Sep 2020

CodeBERT: A Pre-Trained Model for Programming and Natural Languages
Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, ..., Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, Ming Zhou
134 · 2,588 · 0 · 19 Feb 2020

How Can We Know What Language Models Know?
Zhengbao Jiang, Frank F. Xu, Jun Araki, Graham Neubig
KELM · 103 · 1,396 · 0 · 28 Nov 2019

Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
KELM, AI4MH · 545 · 2,639 · 0 · 03 Sep 2019

A Survey of Machine Learning for Big Code and Naturalness
Miltiadis Allamanis, Earl T. Barr, Premkumar T. Devanbu, Charles Sutton
97 · 846 · 0 · 18 Sep 2017

Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models
Wojciech Samek, Thomas Wiegand, K. Müller
XAI, VLM · 57 · 1,186 · 0 · 28 Aug 2017