DP-MemArc: Differential Privacy Transfer Learning for Memory Efficient Language Models

21 February 2025
Yanming Liu, Xinyue Peng, Yuwei Zhang, Xiaolan Ke, Songhang Deng, Jiannan Cao, Chen Ma, Mengchen Fu, Sheng Cheng, Xun Wang, Jianwei Yin, Tianyu Du, Xuhong Zhang
Abstract

Large language models have repeatedly shown outstanding performance across diverse applications. However, deploying these models can inadvertently put user privacy at risk, and their substantial memory demands during training place a heavy load on hardware resources, raising serious practical concerns. In this paper, we introduce DP-MemArc, a novel training framework that reduces the memory cost of training large language models while protecting user data privacy. DP-MemArc incorporates side-network or reversible-network designs to support a variety of differentially private, memory-efficient fine-tuning schemes. Our approach not only achieves roughly a 2.5x reduction in memory usage but also ensures robust privacy protection, keeping user data secure and confidential. Extensive experiments demonstrate that DP-MemArc provides effective differentially private fine-tuning across a range of task scenarios.

@article{liu2025_2406.11087,
  title={DP-MemArc: Differential Privacy Transfer Learning for Memory Efficient Language Models},
  author={Yanming Liu and Xinyue Peng and Yuwei Zhang and Xiaolan Ke and Songhang Deng and Jiannan Cao and Chen Ma and Mengchen Fu and Tianyu Du and Sheng Cheng and Xun Wang and Jianwei Yin and Xuhong Zhang},
  journal={arXiv preprint arXiv:2406.11087},
  year={2025}
}