Large Language Model Agents Are Not Always Faithful Self-Evolvers

Weixiang Zhao
Yingshuo Wang
Yichen Zhang
Yang Deng
Yanyan Zhao
Wanxiang Che
Bing Qin
Ting Liu
Main: 8 pages · 16 figures · 7 tables · Bibliography: 4 pages · Appendix: 13 pages
Abstract

Self-evolving large language model (LLM) agents continually improve by accumulating and reusing past experience, yet it remains unclear whether they faithfully rely on that experience to guide their behavior. We present the first systematic investigation of experience faithfulness in self-evolving LLM agents, i.e., the causal dependence of an agent's decisions on the experience it is given. Using controlled causal interventions on both raw and condensed forms of experience, we comprehensively evaluate four representative frameworks across 10 LLM backbones and 9 environments. Our analysis uncovers a striking asymmetry: while agents consistently depend on raw experience, they often disregard or misinterpret condensed experience, even when it is the only experience provided. This gap persists across single- and multi-agent configurations and across backbone scales. We trace its underlying causes to three factors: the semantic limitations of condensed content, internal processing biases that suppress experience, and task regimes where pretrained priors already suffice. These findings challenge prevailing assumptions about self-evolving methods and underscore the need for more faithful and reliable approaches to experience integration.
