Towards Human-like Preference Profiling in Sequential Recommendation

Sequential recommendation systems aspire to profile users by interpreting their interaction histories, echoing how humans make decisions by weighing experience, relative preference strength, and situational relevance. Yet, existing large language model (LLM)-based recommenders often fall short of mimicking the flexible, context-aware decision strategies humans exhibit, neglecting the structured and dynamic mechanisms fundamental to human behavior. To bridge this gap, we propose RecPO, a preference optimization framework that models structured feedback and contextual delay to emulate human-like prioritization in sequential recommendation. RecPO exploits adaptive reward margins based on inferred preference hierarchies and temporal signals, enabling the model to favor immediately relevant items and to distinguish between varying degrees of preference and aversion. Extensive experiments across five real-world datasets demonstrate that RecPO not only yields performance gains over state-of-the-art baselines, but also mirrors key characteristics of human decision-making: favoring timely satisfaction, maintaining coherent preferences, and exercising discernment under shifting contexts.
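The abstract describes adaptive reward margins driven by preference hierarchies and temporal signals, but does not give the loss. Below is a minimal, hedged sketch of how such a mechanism could look, assuming a DPO-style pairwise objective; the signal names (pref_gap, delay) and the exponential-decay weighting are illustrative assumptions, not RecPO's published formulation.

import torch
import torch.nn.functional as F

def preference_loss_with_adaptive_margin(
    logp_chosen,        # log-prob of the preferred (e.g., next-interacted) item
    logp_rejected,      # log-prob of a dispreferred item
    ref_logp_chosen,    # same quantities under a frozen reference policy
    ref_logp_rejected,
    pref_gap,           # assumed: inferred preference-strength gap in [0, 1]
    delay,              # assumed: contextual delay (e.g., time since interaction)
    beta=0.1,
    margin_scale=1.0,
    decay=0.1,
):
    """DPO-style pairwise loss with an adaptive margin: the required reward gap
    grows with the inferred preference gap and shrinks for stale interactions.
    Illustrative sketch only, not RecPO's exact objective."""
    # Implicit rewards as in DPO: scaled log-ratios against the reference policy.
    reward_chosen = beta * (logp_chosen - ref_logp_chosen)
    reward_rejected = beta * (logp_rejected - ref_logp_rejected)

    # Adaptive margin: larger for strongly separated preferences,
    # discounted exponentially for temporally distant feedback.
    margin = margin_scale * pref_gap * torch.exp(-decay * delay)

    # Penalize pairs whose reward gap does not exceed the adaptive margin.
    return -F.logsigmoid(reward_chosen - reward_rejected - margin).mean()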
@article{ouyang2025_2506.02261,
  title={Towards Human-like Preference Profiling in Sequential Recommendation},
  author={Zhongyu Ouyang and Qianlong Wen and Chunhui Zhang and Yanfang Ye and Soroush Vosoughi},
  journal={arXiv preprint arXiv:2506.02261},
  year={2025}
}