Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion Tokens

30 January 2024
Jiacheng Liu
Sewon Min
Luke Zettlemoyer
Yejin Choi
Hannaneh Hajishirzi
Abstract

Are n-gram language models still relevant in this era of neural large language models (LLMs)? Our answer is yes, and we showcase their value in both text analysis and improving neural LLMs. This was done by modernizing n-gram LMs in two aspects. First, we train them at the same data scale as neural LLMs -- 5 trillion tokens. This is the largest n-gram LM ever built. Second, existing n-gram LMs use small n, which hinders their performance; we instead allow n to be arbitrarily large, by introducing a new ∞-gram LM with backoff. Instead of pre-computing n-gram count tables (which would be very expensive), we develop an engine named infini-gram -- powered by suffix arrays -- that can compute ∞-gram (as well as n-gram with arbitrary n) probabilities with millisecond-level latency. The ∞-gram framework and infini-gram engine enable us to conduct many novel and interesting analyses of human-written and machine-generated text: we find that the ∞-gram LM has fairly high accuracy for next-token prediction (47%), and can complement neural LLMs to greatly reduce their perplexity. When analyzing machine-generated text, we also observe irregularities in the machine--∞-gram agreement level with respect to the suffix length, which indicates deficiencies in neural LLM pretraining and the positional embeddings of Transformers.
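
The following is a minimal, self-contained Python sketch of the core mechanism the abstract describes: counting n-gram continuations with a suffix array and backing off to the longest suffix of the context that occurs in the corpus. It is an illustration only, not the authors' infini-gram engine; the toy corpus, function names, and the naive suffix-array construction are assumptions made for brevity (requires Python 3.10+ for the key= argument to bisect).

from bisect import bisect_left, bisect_right

def build_suffix_array(tokens):
    # Naive O(n^2 log n) construction, fine for a toy corpus; the real engine
    # indexes trillions of tokens and needs a far more careful build.
    return sorted(range(len(tokens)), key=lambda i: tokens[i:])

def count(tokens, sa, pattern):
    # Occurrences of `pattern` (a token tuple) = size of the suffix-array range
    # whose suffixes start with `pattern`, found by two binary searches.
    key = lambda i: tuple(tokens[i:i + len(pattern)])
    return bisect_right(sa, pattern, key=key) - bisect_left(sa, pattern, key=key)

def infty_gram_prob(tokens, sa, context, next_token):
    # Back off to the longest suffix of `context` that occurs in the corpus,
    # then estimate P(next_token | that suffix) from raw counts.
    for start in range(len(context)):
        suffix = tuple(context[start:])
        denom = count(tokens, sa, suffix)
        if denom > 0:
            return count(tokens, sa, suffix + (next_token,)) / denom
    return tokens.count(next_token) / len(tokens)  # empty context: unigram estimate

corpus = "the cat sat on the mat the cat sat on the rug".split()
sa = build_suffix_array(corpus)
print(infty_gram_prob(corpus, sa, ["cat", "sat", "on", "the"], "mat"))  # -> 0.5

On this toy corpus, the context "cat sat on the" occurs twice and is followed by "mat" once, giving an estimate of 0.5; the paper's engine performs the same longest-suffix lookup over a multi-trillion-token index with millisecond-level latency.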

@article{liu2025_2401.17377,
  title={Infini-gram: Scaling Unbounded n-gram Language Models to a Trillion Tokens},
  author={Jiacheng Liu and Sewon Min and Luke Zettlemoyer and Yejin Choi and Hannaneh Hajishirzi},
  journal={arXiv preprint arXiv:2401.17377},
  year={2025}
}