Get Experience from Practice: LLM Agents with Record & Replay

Abstract

AI agents, empowered by Large Language Models (LLMs) and communication protocols such as MCP and A2A, have rapidly evolved from simple chatbots into autonomous entities capable of executing complex, multi-step tasks, demonstrating great potential. However, the inherent uncertainty and heavy computational resource requirements of LLMs pose four significant challenges to the development of safe and efficient agents: reliability, privacy, cost, and performance. Existing approaches, such as model alignment, workflow constraints, and on-device model deployment, can partially alleviate some of these issues, but their limitations prevent them from fundamentally resolving the challenges. This paper proposes a new paradigm called AgentRR (Agent Record & Replay), which introduces the classical record-and-replay mechanism into AI agent frameworks. The core idea is to: (1) record an agent's interaction trace with its environment and its internal decision process during task execution, (2) summarize this trace into a structured "experience" encapsulating the workflow and its constraints, and (3) replay these experiences in subsequent similar tasks to guide the agent's behavior. We detail a multi-level experience abstraction method and a check function mechanism in AgentRR: the former balances experience specificity and generality, while the latter serves as a trust anchor to ensure completeness and safety during replay. In addition, we explore multiple application modes of AgentRR, including user-recorded task demonstration, large-small model collaboration, and privacy-aware agent execution, and we envision an experience repository for sharing and reusing knowledge to further reduce deployment cost.
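The record–summarize–replay loop described in the abstract can be sketched in a few lines of Python. This is only an illustrative reading of the idea, not the paper's actual implementation: the names `RecordingAgent`, `Experience`, and `replay`, and the trivial equality-based check functions, are all assumptions made here for the sake of a runnable example.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Experience:
    """A summarized workflow: ordered steps plus per-step check functions.

    The check functions act as the "trust anchor": replay aborts as soon
    as one of them rejects a step's result.
    """
    steps: List[str]
    checks: List[Callable[[str], bool]]

class RecordingAgent:
    """Records an agent's interaction trace during task execution."""

    def __init__(self) -> None:
        self.trace: List[str] = []

    def act(self, action: str) -> None:
        # Record each interaction with the environment.
        self.trace.append(action)

    def summarize(self) -> Experience:
        # Abstract the raw trace into a reusable experience. In this toy
        # sketch each check simply requires the replayed result to match
        # the recorded step exactly.
        return Experience(
            steps=list(self.trace),
            checks=[(lambda rec: lambda res: res == rec)(a) for a in self.trace],
        )

def replay(exp: Experience, executor: Callable[[str], str]) -> bool:
    """Replay an experience on a new executor; stop if any check fails."""
    for step, check in zip(exp.steps, exp.checks):
        result = executor(step)
        if not check(result):
            return False  # check function rejected the step: abort replay
    return True
```

In this reading, a large model could play the role of the recording agent while a smaller on-device model acts as the `executor` during replay, with the check functions guarding against divergence.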

@article{feng2025_2505.17716,
  title={Get Experience from Practice: LLM Agents with Record \& Replay},
  author={Erhu Feng and Wenbo Zhou and Zibin Liu and Le Chen and Yunpeng Dong and Cheng Zhang and Yisheng Zhao and Dong Du and Zhichao Hua and Yubin Xia and Haibo Chen},
  journal={arXiv preprint arXiv:2505.17716},
  year={2025}
}