Reducing Sequence Length by Predicting Edit Operations with Large Language Models

19 May 2023 · Masahiro Kaneko, Naoaki Okazaki

Papers citing "Reducing Sequence Length by Predicting Edit Operations with Large Language Models"

8 of 8 citing papers shown
Learning to Adapt to Low-Resource Paraphrase Generation
Zhigen Li, Yanmeng Wang, Rizhao Fan, Ye Wang, Jianfeng Li, Shaojun Wang
22 Dec 2024

A Little Leak Will Sink a Great Ship: Survey of Transparency for Large Language Models from Start to Finish
Masahiro Kaneko, Timothy Baldwin
24 Mar 2024

LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions
Minghao Wu, Abdul Waheed, Chiyu Zhang, Muhammad Abdul-Mageed, Alham Fikri Aji
27 Apr 2023

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
04 Mar 2022

A Unified Strategy for Multilingual Grammatical Error Correction with Pre-trained Cross-Lingual Language Model
Xin Sun, Tao Ge, Shuming Ma, Jingjing Li, Furu Wei, Houfeng Wang
26 Jan 2022

Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer
Huiyuan Lai, Antonio Toral, Malvina Nissim
14 May 2021

Efficient Content-Based Sparse Attention with Routing Transformers
Aurko Roy, M. Saffar, Ashish Vaswani, David Grangier
12 Mar 2020

Scaling Laws for Neural Language Models
Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
23 Jan 2020