Merge-Friendly Post-Training Quantization for Multi-Target Domain Adaptation

29 May 2025
Juncheol Shin, Minsang Seok, Seonggon Kim, Eunhyeok Park
Communities: MQ, MoMe
Main: 8 pages, 3 figures, 3 tables; bibliography: 2 pages
Abstract

Model merging has emerged as a powerful technique for combining task-specific weights, achieving superior performance in multi-target domain adaptation. However, new challenges arise when it is applied in practical settings, such as to quantized models. In practice, quantization is often calibrated on target-specific data, which restricts the domain of interest and introduces discretization effects, making model merging highly non-trivial. In this study, we analyze the impact of quantization on model merging through the lens of error barriers. Leveraging these insights, we propose HDRQ (Hessian and distant regularizing quantization), a novel post-training quantization method designed with model merging for multi-target domain adaptation in mind. Our approach ensures that the quantization process deviates minimally from the source pre-trained model while flattening the loss surface to facilitate smooth model merging. To our knowledge, this is the first study of this challenge, and extensive experiments confirm its effectiveness.
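The abstract's key diagnostic is the error barrier between two fine-tuned (and possibly quantized) models: if the loss stays low along the straight line between their weights, simple weight averaging merges them well, while a high barrier signals that merging will degrade accuracy. The PyTorch sketch below shows this measurement plus uniform weight averaging, and a hypothetical distance penalty illustrating one ingredient the abstract attributes to HDRQ (staying close to the source pre-trained weights). The function names, the regularizer form, and lam_dist are illustrative assumptions rather than the paper's implementation, and the Hessian/flatness term is omitted since the abstract gives no formula for it.

import copy
import torch

def interpolate_state_dicts(sd_a, sd_b, alpha):
    # Point on the straight line between two models: (1 - alpha)*A + alpha*B.
    return {k: (1.0 - alpha) * sd_a[k].float() + alpha * sd_b[k].float()
            for k in sd_a}

@torch.no_grad()
def avg_loss(model, loader, criterion, device="cpu"):
    # Mean loss over a small held-out evaluation loader.
    model.eval()
    total, n = 0.0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        total += criterion(model(x), y).item() * x.size(0)
        n += x.size(0)
    return total / n

def error_barrier(template, sd_a, sd_b, loader, criterion,
                  num_points=11, device="cpu"):
    # Barrier height along the linear path:
    #   max_alpha L(theta(alpha)) - max(L(theta_a), L(theta_b)).
    # A near-zero barrier means the two solutions are linearly
    # mode-connected, the regime where weight averaging works well.
    model = copy.deepcopy(template).to(device)
    losses = []
    for i in range(num_points):
        alpha = i / (num_points - 1)
        model.load_state_dict(interpolate_state_dicts(sd_a, sd_b, alpha))
        losses.append(avg_loss(model, loader, criterion, device))
    return max(losses) - max(losses[0], losses[-1]), losses

def merge_by_averaging(state_dicts):
    # Uniform weight averaging of task-specific models. Note: integer
    # buffers (e.g., BatchNorm batch counters) are naively averaged here
    # and may need special handling in real use.
    return {k: torch.stack([sd[k].float() for sd in state_dicts]).mean(0)
            for k in state_dicts[0]}

def source_distance_penalty(model, source_sd, lam_dist=1e-4):
    # Hypothetical regularizer (an assumption, not HDRQ's exact objective):
    # penalize squared distance from the source pre-trained weights so each
    # quantized, target-adapted model stays in the same loss basin.
    return lam_dist * sum(
        (p - source_sd[name].to(p.device)).pow(2).sum()
        for name, p in model.named_parameters())

On this reading, the goal of a merge-friendly quantizer is to keep the measured barrier between independently quantized target models small enough that merge_by_averaging recovers multi-target performance.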

@article{shin2025_2505.23651,
  title={Merge-Friendly Post-Training Quantization for Multi-Target Domain Adaptation},
  author={Juncheol Shin and Minsang Seok and Seonggon Kim and Eunhyeok Park},
  journal={arXiv preprint arXiv:2505.23651},
  year={2025}
}