Evolutionary Perspectives on the Evaluation of LLM-Based AI Agents: A Comprehensive Survey

The advent of large language models (LLMs), such as GPT, Gemini, and DeepSeek, has significantly advanced natural language processing, giving rise to sophisticated chatbots capable of diverse language-related tasks. The transition from these traditional LLM chatbots to more advanced AI agents represents a pivotal evolutionary step. However, existing evaluation frameworks often blur the distinction between LLM chatbots and AI agents, making it difficult for researchers to select appropriate benchmarks. To bridge this gap, this paper presents a systematic analysis of current evaluation approaches, grounded in an evolutionary perspective. We provide a detailed analytical framework that clearly differentiates AI agents from LLM chatbots along five key aspects: complex environment, multi-source instructor, dynamic feedback, multi-modal perception, and advanced capability. Further, we categorize existing evaluation benchmarks based on external environments, driving forces, and the resulting advanced internal capabilities. For each category, we delineate the relevant evaluation attributes, presented comprehensively in practical reference tables. Finally, we synthesize current trends and outline future evaluation methodologies through four critical lenses: environment, agent, evaluator, and metrics. Our findings offer actionable guidance for researchers, facilitating the informed selection and application of benchmarks in AI agent evaluation and fostering continued advancement in this rapidly evolving research domain.
@article{zhu2025_2506.11102,
  title   = {Evolutionary Perspectives on the Evaluation of LLM-Based AI Agents: A Comprehensive Survey},
  author  = {Jiachen Zhu and Menghui Zhu and Renting Rui and Rong Shan and Congmin Zheng and Bo Chen and Yunjia Xi and Jianghao Lin and Weiwen Liu and Ruiming Tang and Yong Yu and Weinan Zhang},
  journal = {arXiv preprint arXiv:2506.11102},
  year    = {2025}
}