Recent advances in LLM agents have largely built on reasoning backbones like ReAct, which interleave thought and action in complex environments. However, ReAct often produces ungrounded or incoherent reasoning steps, leading to misalignment between the agent's actual state and its goal. Our analysis finds that this stems from ReAct's inability to maintain consistent internal beliefs and goal alignment, causing compounding errors and hallucinations. To address this, we introduce ReflAct, a novel backbone that shifts reasoning from merely planning the next action to continuously reflecting on the agent's state relative to its goal. By explicitly grounding decisions in states and enforcing ongoing goal alignment, ReflAct dramatically improves strategic reliability. This design delivers substantial empirical gains: ReflAct surpasses ReAct by 27.7% on average, achieving a 93.3% success rate in ALFWorld. Notably, ReflAct even outperforms ReAct with added enhancement modules (e.g., Reflexion, WKM), showing that strengthening the core reasoning backbone is key to reliable agent performance.
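To make the contrast with ReAct concrete, here is a minimal sketch of what a ReflAct-style decision step could look like. This is not the authors' released code: the function names (`llm`, `reflact_step`) and prompt wording are hypothetical placeholders, and the paper's actual prompts and output parsing will differ. The key idea it illustrates is that the model is asked to verbalize its belief about the current state relative to the goal before acting, rather than producing a free-form "what should I do next" thought.

```python
from typing import Callable

def reflact_step(
    llm: Callable[[str], str],  # hypothetical: maps a prompt to a completion
    goal: str,
    history: list[str],
    observation: str,
) -> str:
    """One decision step: reflect on the current state relative to the goal,
    then choose an action grounded in that reflection."""
    # ReAct would prompt for a free-form "Thought:" about the next action.
    # ReflAct instead elicits a goal-state reflection first.
    reflection = llm(
        f"Goal: {goal}\n"
        f"Trajectory so far:\n" + "\n".join(history) + "\n"
        f"Latest observation: {observation}\n"
        "Reflect: describe the current state of the world and how it "
        "relates to the goal."
    )
    # The action prompt is conditioned on the reflection, grounding the
    # decision in the stated belief about the current state.
    action = llm(
        f"Goal: {goal}\n"
        f"Reflection: {reflection}\n"
        "Based on this reflection, output the single next action."
    )
    history.append(f"Reflect: {reflection}\nAct: {action}\nObs: {observation}")
    return action
```

In an episode loop, `reflact_step` would be called once per environment step, feeding each new observation back in; because every action is conditioned on an explicit state-goal reflection, drift between the agent's internal beliefs and the actual environment state is easier to detect and correct than with unconstrained thoughts.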
@article{kim2025_2505.15182,
  title={ReflAct: World-Grounded Decision Making in LLM Agents via Goal-State Reflection},
  author={Jeonghye Kim and Sojeong Rhee and Minbeom Kim and Dohyung Kim and Sangmook Lee and Youngchul Sung and Kyomin Jung},
  journal={arXiv preprint arXiv:2505.15182},
  year={2025}
}