TableEval: A Real-World Benchmark for Complex, Multilingual, and Multi-Structured Table Question Answering

4 June 2025
Junnan Zhu, Jingyi Wang, Bohan Yu, Xiaoyu Wu, Junbo Li, Lei Wang, Nan Xu
Main: 7 pages · Appendix: 11 pages · Bibliography: 3 pages · 14 figures · 15 tables
Abstract

LLMs have shown impressive progress in natural language processing. However, they still face significant challenges in TableQA, where real-world complexities such as diverse table structures, multilingual data, and domain-specific reasoning are crucial. Existing TableQA benchmarks are often limited by their focus on simple flat tables and suffer from data leakage. Furthermore, most benchmarks are monolingual and fail to capture the cross-lingual and cross-domain variability of practical applications. To address these limitations, we introduce TableEval, a new benchmark designed to evaluate LLMs on realistic TableQA tasks. Specifically, TableEval includes tables with various structures (such as concise, hierarchical, and nested tables) collected from four domains (government, finance, academia, and industry reports). In addition, TableEval features cross-lingual scenarios with tables in Simplified Chinese, Traditional Chinese, and English. To minimize the risk of data leakage, we collect all data from recent real-world documents. Since existing TableQA metrics fail to capture semantic accuracy, we further propose SEAT, a new evaluation framework that assesses the alignment between model responses and reference answers at the sub-question level. Experiments show that SEAT achieves high agreement with human judgment. Extensive experiments on TableEval reveal critical gaps in the ability of state-of-the-art LLMs to handle these complex, real-world TableQA tasks, offering insights for future improvement. We make our dataset available here: this https URL.
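The abstract only sketches how SEAT works. As a rough illustration of what sub-question-level scoring could look like, the Python snippet below decomposes a reference answer into sub-questions and scores a response by the fraction it answers. Every name here, and the naive substring judge, are assumptions made for illustration, not the authors' implementation; the actual SEAT procedure is described in the paper (arXiv:2506.03949).

# Hypothetical sketch of sub-question-level evaluation in the spirit of SEAT.
# The scoring rule and all names are illustrative assumptions, not the
# authors' method.
from dataclasses import dataclass

@dataclass
class SubQuestion:
    text: str       # one atomic sub-question derived from the QA pair
    reference: str  # the reference answer fragment for this sub-question

def judge_alignment(response: str, sub: SubQuestion) -> bool:
    """Placeholder judge: in practice an LLM judge would decide whether
    `response` semantically answers `sub.text` in line with `sub.reference`.
    A naive substring check is used here purely to keep the sketch runnable."""
    return sub.reference.lower() in response.lower()

def seat_score(response: str, subquestions: list[SubQuestion]) -> float:
    """Fraction of sub-questions that the model response answers correctly."""
    if not subquestions:
        return 0.0
    hits = sum(judge_alignment(response, sq) for sq in subquestions)
    return hits / len(subquestions)

if __name__ == "__main__":
    subs = [
        SubQuestion("What was 2024 revenue?", "12.3 billion"),
        SubQuestion("What was the year-over-year growth?", "8%"),
    ]
    # Both sub-questions are answered, so the score is 1.0.
    print(seat_score("Revenue reached 12.3 billion, up 8% YoY.", subs))

Scoring each sub-question separately, then aggregating, is what lets such a metric credit partially correct answers and agree more closely with human judgment than exact-match metrics.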

BibTeX:
@article{zhu2025_2506.03949,
  title={TableEval: A Real-World Benchmark for Complex, Multilingual, and Multi-Structured Table Question Answering},
  author={Junnan Zhu and Jingyi Wang and Bohan Yu and Xiaoyu Wu and Junbo Li and Lei Wang and Nan Xu},
  journal={arXiv preprint arXiv:2506.03949},
  year={2025}
}