Neuro-symbolic Zero-Shot Code Cloning with Cross-Language Intermediate Representation

26 April 2023
Krishnam Hasija
Shrishti Pradhan
Manasi Patwardhan
Raveendra Kumar Medicherla
Lovekesh Vig
Ravindra Naik
arXiv: 2304.13350

Papers citing "Neuro-symbolic Zero-Shot Code Cloning with Cross-Language Intermediate Representation"

26 of 26 citing papers shown:

1. Towards Reasoning in Large Language Models: A Survey. Jie Huang, Kevin Chen-Chuan Chang. 20 Dec 2022.
2. On the Compositional Generalization Gap of In-Context Learning. Arian Hosseini, Ankit Vani, Dzmitry Bahdanau, Alessandro Sordoni, Rameswar Panda. 15 Nov 2022.
3. Evaluating the Impact of Model Scale for Compositional Generalization in Semantic Parsing. Linlu Qiu, Peter Shaw, Panupong Pasupat, Tianze Shi, Jonathan Herzig, Emily Pitler, Fei Sha, Kristina Toutanova. 24 May 2022.
4. CODE-MVP: Learning to Represent Source Code from Multiple Views with Contrastive Pre-Training. Xin Wang, Yasheng Wang, Yao Wan, Jiawei Wang, Pingyi Zhou, Li Li, Hao Wu, Jin Liu. 04 May 2022.
5. GypSum: Learning Hybrid Representations for Code Summarization. Yu Wang, Yu Dong, Xuesong Lu, Aoying Zhou. 26 Apr 2022.
6. PaLM: Scaling Language Modeling with Pathways. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, ..., Kathy Meier-Hellstern, Douglas Eck, J. Dean, Slav Petrov, Noah Fiedel. 05 Apr 2022.
7. UniXcoder: Unified Cross-Modal Pre-training for Code Representation. Daya Guo, Shuai Lu, Nan Duan, Yanlin Wang, Ming Zhou, Jian Yin. 08 Mar 2022.
8. Towards Learning (Dis)-Similarity of Source Code from Program Contrasts. Yangruibo Ding, Luca Buratti, Saurabh Pujar, Alessandro Morari, Baishakhi Ray, Saikat Chakraborty. 08 Oct 2021.
9. CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation. Yue Wang, Weishi Wang, Shafiq Joty, Guosheng Lin. 02 Sep 2021.
10. SynCoBERT: Syntax-Guided Multi-Modal Contrastive Pre-Training for Code Representation. Xin Wang, Yasheng Wang, Fei Mi, Pingyi Zhou, Yao Wan, Xiao Liu, Li Li, Hao Wu, Jin Liu, Xin Jiang. 10 Aug 2021.
11. Evaluating Large Language Models Trained on Code. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé, ..., Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, Wojciech Zaremba. 07 Jul 2021.
12. TreeBERT: A Tree-Based Pre-Trained Model for Programming Language. Xue Jiang, Zhuoran Zheng, Chen Lyu, Liang Li, Lei Lyu. 26 May 2021.
13. ProphetNet-X: Large-Scale Pre-training Models for English, Chinese, Multi-lingual, Dialog, and Code Generation. Weizhen Qi, Yeyun Gong, Yu Yan, Can Xu, Bolun Yao, ..., Daxin Jiang, Jiusheng Chen, Ruofei Zhang, Houqiang Li, Nan Duan. 16 Apr 2021.
14. Language-Agnostic Representation Learning of Source Code from Structure and Context. Daniel Zügner, Tobias Kirschstein, Michele Catasta, J. Leskovec, Stephan Günnemann. 21 Mar 2021.
15. Unified Pre-training for Program Understanding and Generation. Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 10 Mar 2021.
16. GraphCodeBERT: Pre-training Code Representations with Data Flow. Daya Guo, Shuo Ren, Shuai Lu, Zhangyin Feng, Duyu Tang, ..., Dawn Drain, Neel Sundaresan, Jian Yin, Daxin Jiang, M. Zhou. 17 Sep 2020.
17. Self-Supervised Contrastive Learning for Code Retrieval and Summarization via Semantic-Preserving Transformations. Nghi D. Q. Bui, Yijun Yu, Lingxiao Jiang. 06 Sep 2020.
18. Contrastive Code Representation Learning. Paras Jain, Ajay Jain, Tianjun Zhang, Pieter Abbeel, Joseph E. Gonzalez, Ion Stoica. 09 Jul 2020.
19. MISIM: A Neural Code Semantics Similarity System Using the Context-Aware Semantics Structure. Fangke Ye, Sheng-Tian Zhou, Anand Venkat, Ryan Marcus, Nesime Tatbul, ..., Tim Mattson, Tim Kraska, Pradeep Dubey, Vivek Sarkar, Justin Emile Gottschlich. 05 Jun 2020.
20. Language Models are Few-Shot Learners. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei. 28 May 2020.
21. A Transformer-based Approach for Source Code Summarization. Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, Kai-Wei Chang. 01 May 2020.
22. CodeBERT: A Pre-Trained Model for Programming and Natural Languages. Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, ..., Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, Ming Zhou. 19 Feb 2020.
23. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Colin Raffel, Noam M. Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. 23 Oct 2019.
24. CodeSearchNet Challenge: Evaluating the State of Semantic Code Search. Hamel Husain, Hongqiu Wu, Tiferet Gazit, Miltiadis Allamanis, Marc Brockschmidt. 20 Sep 2019.
25. RoBERTa: A Robustly Optimized BERT Pretraining Approach. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov. 26 Jul 2019.
26. Attention Is All You Need. Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin. 12 Jun 2017.