Recommending Metamodel Concepts during Modeling Activities with Pre-Trained Language Models

4 April 2021
Martin Weyssow, H. Sahraoui, Eugene Syriani

Papers citing "Recommending Metamodel Concepts during Modeling Activities with Pre-Trained Language Models"

12 / 12 papers shown
Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, ..., Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, Dario Amodei
28 May 2020

Fast and Memory-Efficient Neural Code Completion
Alexey Svyatkovskiy, Sebastian Lee, A. Hadjitofi, M. Riechert, Juliana Franco, Miltiadis Allamanis
28 Apr 2020

SCELMo: Source Code Embeddings from Language Models
Rafael-Michael Karampatsis, Charles Sutton
28 Apr 2020

CodeBERT: A Pre-Trained Model for Programming and Natural Languages
Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, ..., Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, Ming Zhou
19 Feb 2020

Learning and Evaluating Contextual Embedding of Source Code
Aditya Kanade, Petros Maniatis, Gogul Balakrishnan, Kensen Shi
21 Dec 2019

RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, Veselin Stoyanov
26 Jul 2019

XLNet: Generalized Autoregressive Pretraining for Language Understanding
Zhilin Yang, Zihang Dai, Yiming Yang, J. Carbonell, Ruslan Salakhutdinov, Quoc V. Le
19 Jun 2019

Cross-lingual Language Model Pretraining
Guillaume Lample, Alexis Conneau
22 Jan 2019

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
11 Oct 2018

Attention Is All You Need
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin
12 Jun 2017

Semi-Supervised Classification with Graph Convolutional Networks
Thomas Kipf, Max Welling
09 Sep 2016

Neural Machine Translation of Rare Words with Subword Units
Rico Sennrich, Barry Haddow, Alexandra Birch
31 Aug 2015