ResearchTrend.AI
From Bytes to Ideas: Language Modeling with Autoregressive U-Nets

17 June 2025
Mathurin Videau, Badr Youbi Idrissi, Alessandro Leite, Marc Schoenauer, O. Teytaud, David Lopez-Paz
arXiv:2506.14761 (abs · PDF · HTML)

Papers citing "From Bytes to Ideas: Language Modeling with Autoregressive U-Nets"

4 / 4 papers shown

• CUTE: Measuring LLMs' Understanding of Their Tokens
  Lukas Edman, Helmut Schmid, Alexander Fraser
  23 Sep 2024

• Training LLMs over Neurally Compressed Text
  Brian Lester, Jaehoon Lee, A. Alemi, Jeffrey Pennington, Adam Roberts, Jascha Narain Sohl-Dickstein, Noah Constant
  04 Apr 2024

• MEGABYTE: Predicting Million-byte Sequences with Multiscale Transformers
  L. Yu, Daniel Simig, Colin Flaherty, Armen Aghajanyan, Luke Zettlemoyer, M. Lewis
  12 May 2023

• Scaling Laws for Neural Language Models
  Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
  23 Jan 2020