arXiv: 2311.13784
DaG LLM ver 1.0: Pioneering Instruction-Tuned Language Modeling for Korean NLP
23 November 2023
Authors: Dongjun Jang, Sangah Lee, Sungjoo Byun, Jinwoong Kim, Jean Seo, Minseok Kim, Soyeon Kim, Chaeyoung Oh, Jaeyoon Kim, Hyemi Jo, Hyopil Shin
Tags: ALM
Papers citing "DaG LLM ver 1.0: Pioneering Instruction-Tuned Language Modeling for Korean NLP" (3 of 3 papers shown)
1. Training language models to follow instructions with human feedback
   Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
   Tags: OSLM, ALM · 339 / 12,003 / 0 · 04 Mar 2022
2. Multitask Prompted Training Enables Zero-Shot Task Generalization
   Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, ..., T. Bers, Stella Biderman, Leo Gao, Thomas Wolf, Alexander M. Rush
   Tags: LRM · 213 / 1,661 / 0 · 15 Oct 2021
3. What Changes Can Large-scale Language Models Bring? Intensive Study on HyperCLOVA: Billions-scale Korean Generative Pretrained Transformers
   Boseop Kim, Hyoungseok Kim, Sang-Woo Lee, Gichang Lee, Donghyun Kwak, ..., Jaewook Kang, Inho Kang, Jung-Woo Ha, W. Park, Nako Sung
   Tags: VLM · 249 / 121 / 0 · 10 Sep 2021