Enhancing TableQA through Verifiable Reasoning Trace Reward

Tung Sum Thomas Kwok
Xinyu Wang
Hengzhi He
Xiaofeng Lin
Peng Lu
Liheng Ma
Chunhe Wang
Ying Nian Wu
Lei Ding
Guang Cheng
Main: 8 pages · Bibliography: 5 pages · Appendix: 12 pages · 9 figures · 13 tables
Abstract

A major challenge in training TableQA agents, compared to standard text- and image-based agents, is that answers cannot be inferred from a static input but must be reasoned through stepwise transformations of the table state, introducing multi-step reasoning complexity and environmental interaction. This leads to a research question: can explicit feedback on table transformation actions improve model reasoning capability? In this work, we introduce RE-Tab, a plug-and-play framework that architecturally enhances trajectory search via lightweight, training-free reward modeling by formulating the problem as a Partially Observable Markov Decision Process. We demonstrate that providing explicit verifiable rewards during State Transition ("What is the best action?") and Simulative Reasoning ("Am I sure about the output?") is crucial for steering the agent's navigation across table states. By enforcing stepwise reasoning with reward feedback on table transformations, RE-Tab achieves state-of-the-art performance in TableQA with an almost 25% reduction in inference cost. Furthermore, a direct plug-and-play implementation of RE-Tab brings up to a 41.77% improvement in QA accuracy and a 33.33% reduction in the number of test-time inference samples needed for a consistent answer. Consistent improvement patterns across various LLMs and state-of-the-art benchmarks further confirm RE-Tab's generalisability. The repository is available at this https URL.
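To make the stepwise, reward-guided search described above concrete, the sketch below shows one possible greedy trajectory search over table states. This is not the authors' implementation; the helpers `propose_actions`, `transition_reward`, `apply_action`, and `simulation_reward` are hypothetical stand-ins for the LLM action proposer and the two verifiable reward checks (State Transition and Simulative Reasoning) named in the abstract, and the greedy selection and threshold-based stopping rule are assumptions for illustration.

```python
# Minimal sketch (assumed, not the paper's code): reward-guided stepwise
# table transformation. All callables are hypothetical placeholders.

from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class TableState:
    columns: List[str]
    rows: List[List[str]]


def greedy_trajectory_search(
    state: TableState,
    question: str,
    propose_actions: Callable[[TableState, str], List[str]],  # candidate transformations
    transition_reward: Callable[[TableState, str], float],    # "What is the best action?"
    apply_action: Callable[[TableState, str], TableState],
    simulation_reward: Callable[[TableState, str], float],    # "Am I sure about the output?"
    max_steps: int = 5,
    accept_threshold: float = 0.5,
) -> Tuple[TableState, List[str]]:
    """Greedily pick the highest-reward transformation at each step and
    stop once the simulated answer clears the confidence threshold."""
    trace: List[str] = []
    for _ in range(max_steps):
        candidates = propose_actions(state, question)
        if not candidates:
            break
        # State-transition reward: score each candidate action before committing.
        best = max(candidates, key=lambda a: transition_reward(state, a))
        state = apply_action(state, best)
        trace.append(best)
        # Simulative-reasoning reward: check whether the current table state
        # already supports a confident answer; if so, stop early.
        if simulation_reward(state, question) >= accept_threshold:
            break
    return state, trace
```

Because the reward functions are passed in as parameters, any lightweight, training-free scorer (e.g., a rule-based verifier or a prompted LLM judge) could be plugged in without changing the search loop, which mirrors the plug-and-play framing in the abstract.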
