The emergence of agentic recommender systems powered by Large Language Models (LLMs) represents a paradigm shift in personalized recommendation, leveraging LLMs' advanced reasoning and role-playing capabilities to enable autonomous, adaptive decision-making. Unlike traditional recommendation approaches, agentic recommender systems can dynamically gather and interpret user-item interactions from complex environments, generating robust recommendation strategies that generalize across diverse scenarios. However, the field currently lacks standardized evaluation protocols to systematically assess these methods. To address this critical gap, we propose: (1) an interactive textual recommendation simulator incorporating rich user and item metadata and three typical evaluation scenarios (classic, evolving-interest, and cold-start recommendation tasks); (2) a unified modular framework for developing and studying agentic recommender systems; and (3) the first comprehensive benchmark comparing 10 classical and agentic recommendation methods. Our findings demonstrate the superiority of agentic systems and establish actionable design guidelines for their core components. The benchmark environment has been rigorously validated through an open challenge and remains publicly available with a continuously maintained leaderboard~\footnote[2]{this https URL}, fostering ongoing community engagement and reproducible research.
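To make the evaluation setup concrete, the following is a minimal sketch of how an agentic recommender might interact with an interactive textual simulator across the three scenarios named above (classic, evolving-interest, cold-start). All names here (TextualRecSimulator, SimpleAgenticRecommender, etc.) are hypothetical illustrations, not the AgentRecBench API; a real agent would query an LLM rather than use the simple heuristic shown.

```python
# Hypothetical sketch of an agent/simulator interaction loop; not the AgentRecBench API.
import random
from dataclasses import dataclass


@dataclass
class Item:
    item_id: str
    title: str
    category: str


@dataclass
class SimulatedUser:
    user_id: str
    profile: str                 # textual user metadata
    preferred_category: str      # hidden ground-truth interest


class TextualRecSimulator:
    """Toy stand-in for an interactive textual recommendation environment."""

    def __init__(self, users, items, scenario="classic", seed=0):
        assert scenario in {"classic", "evolving-interest", "cold-start"}
        self.users, self.items, self.scenario = users, items, scenario
        self.rng = random.Random(seed)
        self.step_count = 0

    def observe(self, user):
        """Return the textual context the agent sees for this user."""
        # Cold-start users expose no interaction history, only profile text.
        history = "" if self.scenario == "cold-start" else f" History hints at {user.preferred_category}."
        return f"User profile: {user.profile}.{history}"

    def feedback(self, user, recommended):
        """Return simulated clicks; interests may drift in the evolving scenario."""
        self.step_count += 1
        target = user.preferred_category
        if self.scenario == "evolving-interest" and self.step_count % 5 == 0:
            user.preferred_category = self.rng.choice([i.category for i in self.items])
        return [item.item_id for item in recommended if item.category == target]


class SimpleAgenticRecommender:
    """Placeholder agent: a real system would prompt an LLM to reason over the context."""

    def recommend(self, context, candidates, k=3):
        # Naive heuristic: rank candidates whose category appears in the observed text first.
        ranked = sorted(candidates, key=lambda it: it.category.lower() not in context.lower())
        return ranked[:k]


if __name__ == "__main__":
    items = [Item(f"i{n}", f"Movie {n}", cat) for n, cat in enumerate(["sci-fi", "drama", "sci-fi", "comedy"])]
    user = SimulatedUser("u1", "enjoys thoughtful science fiction", "sci-fi")
    env = TextualRecSimulator([user], items, scenario="classic")
    agent = SimpleAgenticRecommender()

    context = env.observe(user)
    recs = agent.recommend(context, items)
    print("clicked:", env.feedback(user, recs))
```

Under this sketch, switching the `scenario` argument changes only what the environment exposes (no history for cold-start, drifting preferences for evolving-interest), which mirrors how a single modular agent could be benchmarked across all three tasks without code changes.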
@article{shang2025_2505.19623,
  title   = {AgentRecBench: Benchmarking LLM Agent-based Personalized Recommender Systems},
  author  = {Yu Shang and Peijie Liu and Yuwei Yan and Zijing Wu and Leheng Sheng and Yuanqing Yu and Chumeng Jiang and An Zhang and Fengli Xu and Yu Wang and Min Zhang and Yong Li},
  journal = {arXiv preprint arXiv:2505.19623},
  year    = {2025}
}