Reducing Sequence Length by Predicting Edit Operations with Large Language Models
arXiv:2305.11862 · 19 May 2023
Masahiro Kaneko, Naoaki Okazaki
Papers citing "Reducing Sequence Length by Predicting Edit Operations with Large Language Models" (8 / 8 papers shown)

| Title | Authors | Tags | Likes | Citations | Comments | Date |
|---|---|---|---|---|---|---|
| Learning to Adapt to Low-Resource Paraphrase Generation | Zhigen Li, Yanmeng Wang, Rizhao Fan, Ye Wang, Jianfeng Li, Shaojun Wang | — | 124 | 3 | 0 | 22 Dec 2024 |
| A Little Leak Will Sink a Great Ship: Survey of Transparency for Large Language Models from Start to Finish | Masahiro Kaneko, Timothy Baldwin | PILM | 28 | 3 | 0 | 24 Mar 2024 |
| LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions | Minghao Wu, Abdul Waheed, Chiyu Zhang, Muhammad Abdul-Mageed, Alham Fikri Aji | ALM | 132 | 119 | 0 | 27 Apr 2023 |
| Training language models to follow instructions with human feedback | Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe | OSLM, ALM | 319 | 11,953 | 0 | 04 Mar 2022 |
| A Unified Strategy for Multilingual Grammatical Error Correction with Pre-trained Cross-Lingual Language Model | Xin Sun, Tao Ge, Shuming Ma, Jingjing Li, Furu Wei, Houfeng Wang | SyDa | 36 | 26 | 0 | 26 Jan 2022 |
| Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer | Huiyuan Lai, Antonio Toral, Malvina Nissim | — | 29 | 56 | 0 | 14 May 2021 |
| Efficient Content-Based Sparse Attention with Routing Transformers | Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier | MoE | 246 | 580 | 0 | 12 Mar 2020 |
| Scaling Laws for Neural Language Models | Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei | — | 258 | 4,489 | 0 | 23 Jan 2020 |