
arXiv: 2302.10879

kNN-Adapter: Efficient Domain Adaptation for Black-Box Language Models

21 February 2023
Yangsibo Huang
Daogao Liu
Zexuan Zhong
Weijia Shi
Y. Lee
    RALM
    ALM
Abstract

Fine-tuning a language model on a new domain is standard practice for domain adaptation. However, it is infeasible for modern large-scale language models such as GPT-3, which are accessible only through APIs, so their internal parameters cannot be modified. In this paper, we propose kNN-Adapter, a method to effectively adapt these black-box large language models (LLMs) to a new domain. The kNN-Adapter builds on top of the retrieval-augmented language model, and adaptively learns to interpolate the output of the language model with retrieval results from a datastore consisting of the target-domain data. Our experiments on four different domains demonstrate that kNN-Adapter significantly improves perplexity, and works particularly well in settings with limited access to LLMs. Additionally, we show that kNN-Adapter is more effective than fine-tuning when the amount of training data is limited. We also release a dataset to encourage further study.
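The interpolation the abstract describes follows the kNN-LM recipe: mix the black-box LM's next-token distribution with a retrieval distribution built from the target-domain datastore. A minimal sketch, with the caveat that the weight `lam` is a fixed toy value here, whereas kNN-Adapter learns it adaptively, and the probabilities are illustrative numbers, not from the paper:

```python
def interpolate(lm_probs, knn_probs, lam):
    """kNN-LM-style mixture of two next-token distributions.

    lm_probs  -- probabilities returned by the black-box LM API
    knn_probs -- probabilities derived from nearest neighbors in the
                 target-domain datastore
    lam       -- weight on the retrieval distribution (0 = pure LM)
    """
    return [lam * pk + (1.0 - lam) * pl for pk, pl in zip(knn_probs, lm_probs)]

# Toy 4-token vocabulary.
lm_probs = [0.70, 0.15, 0.10, 0.05]
knn_probs = [0.10, 0.60, 0.20, 0.10]
mixed = interpolate(lm_probs, knn_probs, lam=0.25)
# The result is still a valid distribution; probability mass shifts
# toward tokens that the domain datastore favors.
```

Because the mixture only needs the LM's output probabilities, not its weights, it applies to API-only models where gradient-based fine-tuning is impossible.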
