
Multimodal Tabular Reasoning with Privileged Structured Information

Main: 9 pages · Bibliography: 5 pages · Appendix: 3 pages
7 figures · 5 tables
Abstract

Tabular reasoning involves multi-step information extraction and logical inference over tabular data. While recent advances have leveraged large language models (LLMs) for reasoning over structured tables, such high-quality textual representations are often unavailable in real-world settings, where tables typically appear as images. In this paper, we tackle the task of tabular reasoning from table images, leveraging privileged structured information available during training to enhance multimodal large language models (MLLMs). The key challenges lie in the complexity of accurately aligning structured information with visual representations, and in effectively transferring structured reasoning skills to MLLMs despite the input modality gap. To address these, we introduce TabUlar Reasoning with Bridged infOrmation (Turbo), a new framework for multimodal tabular reasoning with privileged structured tables. Turbo benefits from a structure-aware reasoning trace generator based on DeepSeek-R1, contributing to high-quality modality-bridged data. On this basis, Turbo repeatedly generates and selects the advantageous reasoning paths, further enhancing the model's tabular reasoning ability. Experimental results demonstrate that, with limited (99k) data, Turbo achieves state-of-the-art performance (+7.2% vs. previous SOTA) across multiple datasets.

@article{jiang2025_2506.04088,
  title={Multimodal Tabular Reasoning with Privileged Structured Information},
  author={Jun-Peng Jiang and Yu Xia and Hai-Long Sun and Shiyin Lu and Qing-Guo Chen and Weihua Luo and Kaifu Zhang and De-Chuan Zhan and Han-Jia Ye},
  journal={arXiv preprint arXiv:2506.04088},
  year={2025}
}