Federated Fine-Tuning of Large Language Models: Kahneman-Tversky vs. Direct Preference Optimization

We evaluate Kahneman-Tversky Optimization (KTO) as a fine-tuning method for large language models (LLMs) in federated learning (FL) settings, comparing it against Direct Preference Optimization (DPO). Using Alpaca-7B as the base model, we fine-tune on a realistic dataset under both methods and evaluate performance with the MT-Bench-1, Vicuna, and AdvBench benchmarks. Additionally, we introduce a redistributed dataset setup in which only KTO is applicable, since it can learn from single-response feedback whereas DPO relies on paired responses. Our results demonstrate that KTO, in both its original (KTOO) and redistributed (KTOR) configurations, consistently outperforms DPO across all benchmarks. In the redistributed setup, KTO further demonstrates its flexibility and resilience by maintaining superior performance in scenarios where DPO cannot be applied. These findings establish KTO as a robust and scalable fine-tuning method for FL, motivating its adoption in privacy-preserving, decentralized, and heterogeneous environments.
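
To make the data-format distinction concrete, the minimal Python sketch below (with hypothetical example records, not data from the paper) contrasts the paired preference records DPO consumes with the single-response, binary-label records KTO accepts, illustrating why a client whose feedback lacks paired completions can still train with KTO.

```python
# Illustrative sketch: DPO needs a chosen AND a rejected completion for the
# same prompt, while KTO needs only one completion plus a binary label.

# DPO-style paired preference record (both completions tied to one prompt).
dpo_record = {
    "prompt": "Explain federated learning in one sentence.",
    "chosen": "Federated learning trains a shared model across devices "
              "without centralizing raw data.",
    "rejected": "Federated learning means uploading all user data to one server.",
}

# KTO-style records: a single completion with a thumbs-up/down style label,
# the kind of feedback an individual FL client can realistically collect.
kto_records = [
    {
        "prompt": "Explain federated learning in one sentence.",
        "completion": "Federated learning trains a shared model across devices "
                      "without centralizing raw data.",
        "label": True,   # desirable
    },
    {
        "prompt": "Explain federated learning in one sentence.",
        "completion": "Federated learning means uploading all user data to one server.",
        "label": False,  # undesirable
    },
]

def can_form_dpo_pairs(records):
    """A client can run DPO only if it holds both a desirable and an
    undesirable completion for at least one prompt."""
    labels_by_prompt = {}
    for r in records:
        labels_by_prompt.setdefault(r["prompt"], set()).add(r["label"])
    return any(labels == {True, False} for labels in labels_by_prompt.values())

# In a redistributed setup, a client may hold only unpaired feedback,
# so DPO becomes inapplicable while KTO can still use every record.
single_client_shard = kto_records[:1]
print(can_form_dpo_pairs(single_client_shard))  # False -> DPO cannot be applied
```
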
@article{spadea2025_2502.14187,
  title={Federated Fine-Tuning of Large Language Models: Kahneman-Tversky vs. Direct Preference Optimization},
  author={Fernando Spadea and Oshani Seneviratne},
  journal={arXiv preprint arXiv:2502.14187},
  year={2025}
}