Empowering Character-level Text Infilling by Eliminating Sub-Tokens

arXiv: 2405.17103 · 27 May 2024
Houxing Ren, Mingjie Zhan, Zhongyuan Wu, Hongsheng Li
Community: AI4CE

Papers citing "Empowering Character-level Text Infilling by Eliminating Sub-Tokens"

2 / 2 papers shown

Large Language Models are Zero-Shot Reasoners
Takeshi Kojima, S. Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa
Topics: ReLM, LRM
24 May 2022

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
Topics: OSLM, ALM
04 Mar 2022