NoEsis: Differentially Private Knowledge Transfer in Modular LLM Adaptation

25 April 2025
Rob Romijnders
Stefanos Laskaridis
Ali Shahin Shamsabadi
Hamed Haddadi
Abstract

Large Language Models (LLMs) are typically trained on vast amounts of data from diverse sources. Even when designed modularly (e.g., as a Mixture-of-Experts), LLMs can leak private information about their data sources. Conversely, training such models in isolation arguably prevents generalization. To this end, we propose NoEsis, a framework built on the desired properties of modularity, privacy, and knowledge transfer. NoEsis integrates differential privacy with a hybrid, two-stage parameter-efficient fine-tuning scheme that combines domain-specific low-rank adapters, acting as experts, with common prompt tokens, acting as a knowledge-sharing backbone. Results from our evaluation on CodeXGLUE show that NoEsis achieves provable privacy guarantees with tangible knowledge transfer across domains, and empirically demonstrate protection against Membership Inference Attacks. Finally, on code completion tasks, NoEsis bridges at least 77% of the accuracy gap between the non-shared and the non-private baselines.
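
To make the architecture described above concrete, here is a minimal PyTorch sketch of the idea as stated in the abstract: per-domain low-rank (LoRA) adapters act as experts, a set of shared soft-prompt tokens acts as the knowledge-sharing backbone, and the shared part is updated with a differentially private step. This is not the authors' implementation; all class names (LoRALinear, NoEsisBlock, dp_step), shapes, and hyperparameters are illustrative assumptions, and the DP step clips the batch gradient for brevity where DP-SGD proper clips per-example gradients.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update (one per domain)."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base LLM weights stay frozen
        self.A = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, base.out_features))

    def forward(self, x):
        return self.base(x) + x @ self.A @ self.B

class NoEsisBlock(nn.Module):
    """Shared prompt tokens (common backbone) + domain-specific LoRA experts."""
    def __init__(self, hidden: int = 64, n_domains: int = 3, n_prompt: int = 4):
        super().__init__()
        self.shared_prompt = nn.Parameter(torch.randn(n_prompt, hidden) * 0.01)
        self.experts = nn.ModuleList(
            LoRALinear(nn.Linear(hidden, hidden)) for _ in range(n_domains)
        )

    def forward(self, x, domain: int):
        # Prepend the shared soft-prompt tokens, then apply the domain's expert.
        prompts = self.shared_prompt.unsqueeze(0).expand(x.size(0), -1, -1)
        h = torch.cat([prompts, x], dim=1)
        return self.experts[domain](h)

def dp_step(params, lr=1e-3, clip=1.0, sigma=0.5):
    """Simplified DP update: clip the gradient norm, add Gaussian noise."""
    grads = [p.grad for p in params if p.grad is not None]
    total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = min(1.0, clip / (total_norm + 1e-6))
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            noisy = p.grad * scale + sigma * clip * torch.randn_like(p.grad)
            p -= lr * noisy
            p.grad = None

# Two-stage training, loosely following the abstract:
# stage 1 updates the shared prompt tokens privately across all domains;
# stage 2 would train each domain's LoRA expert on its own (non-shared) data.
model = NoEsisBlock()
x = torch.randn(2, 5, 64)            # toy batch of hidden states
loss = model(x, domain=0).pow(2).mean()
loss.backward()
dp_step([model.shared_prompt])       # private update of the shared backbone

The key design point, as described in the abstract, is that only the shared prompt tokens carry information across domains and therefore need differentially private training, while each expert sees only its own domain's data.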

@article{romijnders2025_2504.18147,
  title={NoEsis: Differentially Private Knowledge Transfer in Modular LLM Adaptation},
  author={Rob Romijnders and Stefanos Laskaridis and Ali Shahin Shamsabadi and Hamed Haddadi},
  journal={arXiv preprint arXiv:2504.18147},
  year={2025}
}