Expectation Confirmation Preference Optimization for Multi-Turn Conversational Recommendation Agent

Recent advancements in Large Language Models (LLMs) have significantly propelled the development of Conversational Recommendation Agents (CRAs). However, these agents often generate short-sighted responses that fail to sustain user guidance or meet user expectations. Although preference optimization has proven effective in aligning LLMs with user expectations, it remains costly and performs poorly in multi-turn dialogue. To address this challenge, we introduce ECPO, a novel multi-turn preference optimization (MTPO) paradigm that leverages Expectation Confirmation Theory to explicitly model the evolution of user satisfaction throughout multi-turn dialogues and uncover the underlying causes of dissatisfaction. These causes then guide targeted optimization of unsatisfactory responses, thereby achieving turn-level preference optimization. ECPO eliminates the substantial sampling overhead of existing MTPO methods while ensuring that the optimization process drives meaningful improvements. To support ECPO, we introduce an LLM-based user simulator, AILO, which simulates user feedback and performs expectation confirmation during conversational recommendation. Experimental results show that ECPO significantly enhances CRAs' interaction capabilities, delivering notable improvements in both efficiency and effectiveness over existing MTPO methods.
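To make the paradigm concrete, the sketch below illustrates one way turn-level preference pairs could be constructed from expectation confirmation, based only on the abstract: a simulated user scores each agent turn, and only unsatisfactory turns are diagnosed and rewritten into (rejected, chosen) pairs, avoiding the trajectory resampling of sampling-based MTPO. This is an illustrative sketch, not the authors' implementation; the objects `agent`, `simulator`, and `rewriter` and all method names (`simulate_user_turn`, `confirm_expectation`, `diagnose_dissatisfaction`, `rewrite_response`) are hypothetical placeholders for LLM calls.

```python
# Illustrative sketch (not the paper's released code): building turn-level
# preference pairs via expectation confirmation with an LLM user simulator.
from dataclasses import dataclass


@dataclass
class PreferencePair:
    context: str   # dialogue history up to this turn
    rejected: str  # the agent's original, unsatisfactory response
    chosen: str    # a targeted rewrite addressing the diagnosed cause


def collect_turn_level_pairs(agent, simulator, rewriter, initial_context,
                             max_turns=5, threshold=0.5):
    """Roll out a conversation and build preference pairs only for turns whose
    responses fall short of the simulated user's expectations.

    All callee methods are hypothetical placeholders standing in for LLM calls
    by a CRA, an AILO-style user simulator, and a response rewriter.
    """
    pairs, history = [], initial_context
    for _ in range(max_turns):
        response = agent.respond(history)                     # CRA generates a turn
        feedback = simulator.simulate_user_turn(history, response)
        score = simulator.confirm_expectation(history, response, feedback)
        if score < threshold:
            # Expectation not confirmed: diagnose why and rewrite the response,
            # instead of resampling many candidate trajectories.
            cause = simulator.diagnose_dissatisfaction(history, response, feedback)
            improved = rewriter.rewrite_response(history, response, cause)
            pairs.append(PreferencePair(context=history,
                                        rejected=response,
                                        chosen=improved))
            response = improved                               # continue from the better turn
        history = f"{history}\nAssistant: {response}\nUser: {feedback}"
    return pairs  # pairs can then feed a standard DPO-style turn-level objective
```

Under these assumptions, the collected pairs would be consumed by an ordinary pairwise preference objective, so the turn-level rewriting, rather than extra trajectory sampling, is what supplies the contrastive signal.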
@article{feng2025_2506.14302,
  title={Expectation Confirmation Preference Optimization for Multi-Turn Conversational Recommendation Agent},
  author={Xueyang Feng and Jingsen Zhang and Jiakai Tang and Wei Li and Guohao Cai and Xu Chen and Quanyu Dai and Yue Zhu and Zhenhua Dong},
  journal={arXiv preprint arXiv:2506.14302},
  year={2025}
}