
Beyond State Consistency: Behavior Consistency in Text-Based World Models

Youling Huang
Guanqiao Chen
Junchi Yao
Lu Wang
Fangkai Yang
Chao Du
ChenZhuo Zhao
Pu Zhao
Qingwei Lin
Saravan Rajmohan
Dongmei Zhang
Main: 8 pages · 6 figures · 22 tables · Bibliography: 2 pages · Appendix: 10 pages
Abstract

World models are emerging as critical components for assessing the consequences of actions generated by interactive agents in online planning and offline evaluation. In text-based environments, world models are typically trained and evaluated with single-step metrics such as Exact Match, which reward similarity between predicted and real next states; however, such metrics have been shown to be insufficient for capturing actual agent behavior. To address this issue, we introduce a behavior-aligned training paradigm aimed at improving the functional consistency between the world model and the real environment. This paradigm optimizes a tractable step-level metric, the Behavior Consistency Reward (BehR), which measures how much the likelihood of a logged next action changes between the real state and the world-model-predicted state under a frozen Reference Agent. Experiments on WebShop and TextWorld show that BehR-based training improves long-term alignment in several settings, with the clearest gains on WebShop and smaller movement in near-ceiling regimes, while preserving or improving single-step prediction quality in three of four settings. World models trained with BehR also produce fewer false positives in offline surrogate evaluation and show modest but encouraging gains in inference-time lookahead planning.
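The abstract defines BehR as the change in a logged action's likelihood between the real state and the predicted state, scored by a frozen Reference Agent. A minimal sketch of that idea, assuming the reward simply penalizes the absolute log-likelihood shift (the exact functional form, and the names `ref_agent_logprob` and `behavior_consistency_reward`, are illustrative assumptions, not the paper's implementation):

```python
from typing import Callable

def behavior_consistency_reward(
    ref_agent_logprob: Callable[[str, str], float],
    real_state: str,
    predicted_state: str,
    logged_action: str,
) -> float:
    """Sketch of a BehR-style step-level reward.

    ref_agent_logprob(state, action) is a frozen Reference Agent's
    log-probability of taking `action` in `state`. The reward is higher
    when the world-model-predicted state leaves the logged action's
    likelihood unchanged relative to the real state.
    """
    lp_real = ref_agent_logprob(real_state, logged_action)
    lp_pred = ref_agent_logprob(predicted_state, logged_action)
    # Smaller likelihood shift -> higher (less negative) reward;
    # a perfect behavioral match scores 0.
    return -abs(lp_real - lp_pred)
```

Under this sketch, a predicted state that is textually different from the real state can still score well if the Reference Agent would act the same way in both, which is the behavioral-consistency property single-step metrics like Exact Match miss.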
