On Retrieval Augmentation and the Limitations of Language Model Training

16 November 2023
Ting-Rui Chiang
Xinyan Velocity Yu
Joshua Robinson
Ollie Liu
Isabelle G. Lee
Dani Yogatama
Abstract

Augmenting a language model (LM) with k-nearest neighbors (kNN) retrieval on its training data alone can decrease its perplexity, though the underlying reasons for this remain elusive. In this work, we rule out one previously posited possibility -- the "softmax bottleneck." We then create a new dataset to evaluate LM generalization ability in the setting where training data contains additional information that is not causally relevant. This task is challenging even for GPT-3.5 Turbo. We show that, for both GPT-2 and Mistral 7B, kNN retrieval augmentation consistently improves performance in this setting. Finally, to make kNN retrieval more accessible, we propose using a multi-layer perceptron model that maps datastore keys to values as a drop-in replacement for traditional retrieval. This reduces storage costs by over 25x.
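The abstract leaves the retrieval mechanism implicit. As a rough sketch of the standard kNN-LM interpolation that this line of work builds on (not the paper's exact implementation), the snippet below assumes datastore keys are LM hidden states computed over the training data, values are the observed next tokens, and names such as `knn_lm_probs`, `lam`, and `temp` are illustrative:

```python
import numpy as np

def knn_lm_probs(query, keys, values, p_lm, k=8, temp=1.0, lam=0.25):
    """Interpolate an LM's next-token distribution with a kNN distribution.

    query:  hidden state for the current context, shape (d,)
    keys:   datastore keys (hidden states from training data), shape (N, d)
    values: next-token id recorded for each key, shape (N,)
    p_lm:   the LM's next-token distribution, shape (vocab_size,)
    """
    # Squared L2 distance from the query to every datastore key.
    dists = ((keys - query) ** 2).sum(axis=1)
    nn = np.argsort(dists)[:k]              # indices of the k nearest keys
    weights = np.exp(-dists[nn] / temp)     # softmax over negative distances
    weights /= weights.sum()
    # Aggregate neighbor weights per vocabulary item.
    p_knn = np.zeros_like(p_lm)
    np.add.at(p_knn, values[nn], weights)
    # Standard kNN-LM mixture of the two distributions.
    return lam * p_knn + (1 - lam) * p_lm
```

The paper's proposed MLP replacement is likewise only named in the abstract; a minimal sketch, assuming the MLP is trained to reproduce the key-to-value mapping so the explicit datastore can be discarded at inference time (the architecture below is an illustrative guess, not the paper's model):

```python
import torch.nn as nn

class KeyToValueMLP(nn.Module):
    """Hypothetical drop-in for the kNN lookup above: maps a datastore key
    (a hidden state) directly to a distribution over values (token ids)."""

    def __init__(self, d_model, vocab_size, d_hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.ReLU(),
            nn.Linear(d_hidden, vocab_size),
        )

    def forward(self, query):               # query: (batch, d_model)
        # Output plays the role of p_knn in the interpolation above.
        return self.net(query).softmax(-1)
```

Because the MLP's parameters are all that must be kept, this is where the abstract's claimed storage reduction of over 25x relative to the full (keys, values) datastore would come from.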
