APT: Improving Specialist LLM Performance with Weakness Case Acquisition and Iterative Preference Training

Large Language Models (LLMs) often require domain-specific fine-tuning to address targeted tasks, which risks degrading their general capabilities. Maintaining a balance between domain-specific enhancements and general model utility is a key challenge. This paper proposes a novel approach named APT (Weakness Case Acquisition and Iterative Preference Training) to enhance domain-specific performance with self-generated dis-preferred weakness data (bad cases and similar cases). APT trains the model only on samples where errors occur, together with a small set of similar samples retrieved for this purpose. This targeted training minimizes interference with the model's existing knowledge, effectively preserving its general capabilities. Experimental results on the LLaMA-2 and Mistral-v0.3 models across various benchmarks demonstrate that APT incurs no reduction in general capability while achieving superior performance on downstream tasks compared to existing methods. This validates our method as an effective strategy for enhancing domain-specific capabilities without sacrificing the model's broader applicability.
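
The abstract outlines a loop of collecting the model's own wrong answers, retrieving a few similar training items, and running preference training on the resulting pairs. The sketch below illustrates that flow under stated assumptions; the function names (generate, is_correct, similarity, preference_update) and the toy lookup-table "model" are illustrative stand-ins, not the authors' implementation, and a real system would use an LLM, embedding-based retrieval, and a DPO-style objective.

```python
# Minimal sketch of the APT-style loop described in the abstract:
# 1) find bad cases (prompts the model answers incorrectly),
# 2) retrieve a few similar samples for each bad case,
# 3) build preferred/dis-preferred pairs (gold answer vs. the model's own output),
# 4) apply an iterative preference update.
# All names here are assumptions for illustration, not the paper's API.

from dataclasses import dataclass


@dataclass
class Example:
    prompt: str
    reference: str  # gold answer used as the preferred response


def generate(model, prompt: str) -> str:
    """Stand-in for LLM inference (here: a simple lookup)."""
    return model.get(prompt, "")


def is_correct(prediction: str, reference: str) -> bool:
    """Stand-in for the task-specific correctness check (e.g., exact match)."""
    return prediction.strip() == reference.strip()


def similarity(a: str, b: str) -> float:
    """Toy lexical overlap; a real system would use embedding similarity."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / max(1, len(ta | tb))


def acquire_weakness_cases(model, train_set, k_similar=2):
    """Keep only examples the model gets wrong, plus a few similar examples each."""
    bad_cases = [ex for ex in train_set
                 if not is_correct(generate(model, ex.prompt), ex.reference)]
    selected = list(bad_cases)
    for bad in bad_cases:
        neighbors = sorted(
            (ex for ex in train_set if ex is not bad),
            key=lambda ex: similarity(ex.prompt, bad.prompt),
            reverse=True,
        )[:k_similar]
        selected.extend(neighbors)
    return selected


def build_preference_pairs(model, cases):
    """Pair the gold answer (preferred) with the model's own output (dis-preferred)."""
    return [(ex.prompt, ex.reference, generate(model, ex.prompt)) for ex in cases]


def preference_update(model, pairs):
    """Stand-in for a DPO-style update; the toy model just memorizes the preferred answer."""
    for prompt, preferred, _rejected in pairs:
        model[prompt] = preferred
    return model


def apt_training(model, train_set, rounds=3):
    """Iterate acquisition and preference training until no weaknesses remain."""
    for _ in range(rounds):
        cases = acquire_weakness_cases(model, train_set)
        if not cases:
            break  # nothing left to correct on this data
        pairs = build_preference_pairs(model, cases)
        model = preference_update(model, pairs)
    return model


if __name__ == "__main__":
    toy_model = {"2+2=?": "5"}  # toy prompt -> answer table standing in for an LLM
    data = [Example("2+2=?", "4"), Example("3+3=?", "6")]
    print(apt_training(toy_model, data))
```

Because updates are driven only by the model's own errors and a small retrieved neighborhood, the rest of the training distribution is left untouched, which is the mechanism the abstract credits for preserving general capabilities.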
@article{rao2025_2506.03483,
  title={APT: Improving Specialist LLM Performance with Weakness Case Acquisition and Iterative Preference Training},
  author={Jun Rao and Zepeng Lin and Xuebo Liu and Xiaopeng Ke and Lian Lian and Dong Jin and Shengjun Cheng and Jun Yu and Min Zhang},
  journal={arXiv preprint arXiv:2506.03483},
  year={2025}
}