Noise-Robustness Through Noise: Asymmetric LoRA Adaption with Poisoning Expert

Current parameter-efficient fine-tuning methods for adapting pre-trained language models to downstream tasks are susceptible to interference from noisy data. Conventional noise-handling approaches either rely on laborious data pre-processing or employ model architecture modifications prone to error accumulation. In contrast to existing noise-processing paradigms, we propose a noise-robust adaptation method via asymmetric LoRA poisoning experts (LoPE), a novel framework that enhances model robustness to noise using only generated noisy data. Drawing inspiration from the mixture-of-experts architecture, LoPE strategically integrates a dedicated poisoning expert in an asymmetric LoRA configuration. Through a two-stage paradigm, LoPE performs noise injection on the poisoning expert during fine-tuning to enhance its ability to discriminate and process noise. During inference, we selectively mask the dedicated poisoning expert to leverage the purified knowledge acquired by the normal experts for noise-robust output. Extensive experiments demonstrate that LoPE achieves strong performance and robustness purely through low-cost noise injection, completely eliminating the need for data cleaning.
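
As a rough illustration of the mechanism the abstract describes (not the authors' released implementation), the sketch below assumes the common asymmetric-LoRA layout with a single shared down-projection A and per-expert up-projections B, a simple softmax router, Gaussian noise injection on the poisoning expert during fine-tuning, and masking of that expert at inference. The names noise_std, poison_idx, and training_stage are illustrative choices, not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AsymmetricLoRAWithPoisonExpert(nn.Module):
    def __init__(self, d_in, d_out, rank=8, n_experts=4, noise_std=0.1):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)        # stand-in for the frozen pre-trained weight
        self.base.weight.requires_grad_(False)
        self.lora_A = nn.Linear(d_in, rank, bias=False)   # shared down-projection (assumption)
        self.lora_B = nn.ModuleList(                       # one up-projection per expert
            [nn.Linear(rank, d_out, bias=False) for _ in range(n_experts)]
        )
        self.router = nn.Linear(d_in, n_experts, bias=False)
        self.poison_idx = n_experts - 1            # index of the dedicated poisoning expert (assumption)
        self.noise_std = noise_std

    def forward(self, x, training_stage=True):
        h = self.base(x)
        z = self.lora_A(x)
        gates = F.softmax(self.router(x), dim=-1)  # [..., n_experts]

        if not training_stage:
            # Inference: mask the poisoning expert and renormalise the gates,
            # so only the normal experts contribute, as described in the abstract.
            gates = gates.clone()
            gates[..., self.poison_idx] = 0.0
            gates = gates / gates.sum(dim=-1, keepdim=True).clamp_min(1e-9)

        for i, B in enumerate(self.lora_B):
            delta = B(z)
            if training_stage and i == self.poison_idx:
                # Noise injection on the poisoning expert during fine-tuning
                # (Gaussian noise is an illustrative choice, not from the paper).
                delta = delta + self.noise_std * torch.randn_like(delta)
            h = h + gates[..., i:i + 1] * delta
        return h

At inference time one would call the layer with training_stage=False, so the router mass assigned to the poisoning expert is redistributed over the remaining experts and only their purified knowledge shapes the output.
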
@article{wang2025_2505.23868,
  title   = {Noise-Robustness Through Noise: Asymmetric LoRA Adaption with Poisoning Expert},
  author  = {Zhaokun Wang and Jinyu Guo and Jingwen Pu and Lingfeng Chen and Hongli Pu and Jie Ou and Libo Qin and Wenhong Tian},
  journal = {arXiv preprint arXiv:2505.23868},
  year    = {2025}
}