AdapterSwap: Continuous Training of LLMs with Data Removal and Access-Control Guarantees

12 April 2024
William Fleshman
Aleem Khan
Marc Marone
Benjamin Van Durme
Communities: CLL, KELM
Abstract

Large language models (LLMs) are increasingly capable of completing knowledge-intensive tasks by recalling information from a static pretraining corpus. Here we are concerned with LLMs in the context of evolving data requirements. For instance: batches of new data that are introduced periodically; subsets of data with user-based access controls; or requirements on dynamic removal of documents with guarantees that associated knowledge cannot be recalled. We wish to satisfy these requirements while at the same time ensuring a model does not forget old information when new data becomes available. To address these issues, we introduce AdapterSwap, a training and inference scheme that organizes knowledge from a data collection into a set of low-rank adapters, which are dynamically composed during inference. Our experiments demonstrate AdapterSwap's ability to support efficient continual learning, while also enabling organizations to have fine-grained control over data access and deletion.
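The abstract does not pin down an implementation, so the following is only a rough, hypothetical sketch of the pattern it describes: one low-rank adapter per access-controlled data partition, composed on top of a frozen base layer at inference, with removal handled by deleting the adapter. All names here (LoRAAdapter, AdapterSwapLinear, the partition ids) are invented for illustration and should not be read as the authors' code.

# Hypothetical PyTorch sketch of the adapter-per-partition idea from the
# abstract, not the authors' implementation.
import torch
import torch.nn as nn


class LoRAAdapter(nn.Module):
    """One low-rank update (alpha/r) * B @ A trained on a single data partition."""

    def __init__(self, d_in: int, d_out: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return (x @ self.A.T @ self.B.T) * self.scale


class AdapterSwapLinear(nn.Module):
    """Frozen base linear layer plus a dictionary of per-partition adapters."""

    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        self.adapters = nn.ModuleDict()  # partition id -> LoRAAdapter

    def add_partition(self, pid: str, rank: int = 8) -> None:
        # New data batches get their own adapter; the base weights stay frozen,
        # so earlier partitions are not overwritten (continual learning).
        self.adapters[pid] = LoRAAdapter(self.base.in_features, self.base.out_features, rank)

    def delete_partition(self, pid: str) -> None:
        # Data removal: the adapter holds the only weights trained on this
        # partition, so deleting it removes that knowledge from the model.
        del self.adapters[pid]

    def forward(self, x: torch.Tensor, allowed: set) -> torch.Tensor:
        out = self.base(x)
        for pid, adapter in self.adapters.items():
            if pid in allowed:  # access control: compose only permitted adapters
                out = out + adapter(x)
        return out


if __name__ == "__main__":
    layer = AdapterSwapLinear(16, 16)
    layer.add_partition("public")
    layer.add_partition("hr-restricted")
    x = torch.randn(2, 16)
    y_admin = layer(x, allowed={"public", "hr-restricted"})  # full access
    y_guest = layer(x, allowed={"public"})                   # restricted view
    layer.delete_partition("hr-restricted")                  # removal guarantee

In this sketch the access-control and deletion guarantees follow directly from the structure: a partition's influence exists only inside its adapter, so withholding or deleting that adapter withholds or deletes the associated knowledge.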

View on arXiv
@article{fleshman2025_2404.08417,
  title={AdapterSwap: Continuous Training of LLMs with Data Removal and Access-Control Guarantees},
  author={William Fleshman and Aleem Khan and Marc Marone and Benjamin Van Durme},
  journal={arXiv preprint arXiv:2404.08417},
  year={2025}
}