
TableLoRA: Low-rank Adaptation on Table Structure Understanding for Large Language Models

Main: 9 pages
7 figures
Bibliography: 5 pages
16 tables
Appendix: 4 pages
Abstract

Tabular data are crucial in many fields, and their understanding by large language models (LLMs) under a high-parameter-efficiency paradigm is important. However, directly applying parameter-efficient fine-tuning (PEFT) techniques to tabular tasks presents significant challenges, particularly in terms of better table serialization and the representation of two-dimensional structured information within a one-dimensional sequence. To address this, we propose TableLoRA, a module designed to improve LLMs' understanding of table structure during PEFT. It incorporates special tokens for serializing tables with a special token encoder and uses 2D LoRA to encode low-rank information on cell positions. Experiments on four table-related datasets demonstrate that TableLoRA consistently outperforms vanilla LoRA and surpasses various table encoding methods tested in control experiments. These findings reveal that TableLoRA, as a table-specific LoRA, enhances the ability of LLMs to process tabular data effectively, especially in low-parameter settings, demonstrating its potential as a robust solution for handling table-related tasks.
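To make the two ideas in the abstract concrete, below is a minimal, illustrative sketch (not the authors' implementation) of a LoRA-style adapter whose low-rank update is conditioned on each token's (row, column) position in a serialized table, in the spirit of the described "2D LoRA". All names here (TwoDLoRALinear, row_emb, col_emb, max_rows, max_cols) are hypothetical and introduced only for illustration.

import torch
import torch.nn as nn

class TwoDLoRALinear(nn.Module):
    """Frozen base projection plus a low-rank update gated by cell positions (sketch only)."""
    def __init__(self, base: nn.Linear, rank: int = 8,
                 max_rows: int = 64, max_cols: int = 64):
        super().__init__()
        self.base = base                      # frozen pretrained projection
        for p in self.base.parameters():
            p.requires_grad = False
        d_in, d_out = base.in_features, base.out_features
        # Standard LoRA factors.
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, rank))
        # Low-rank embeddings of a token's row/column index in the table
        # (index 0 reserved for non-table tokens).
        self.row_emb = nn.Embedding(max_rows + 1, rank)
        self.col_emb = nn.Embedding(max_cols + 1, rank)

    def forward(self, x, row_ids, col_ids):
        # x: (batch, seq, d_in); row_ids / col_ids: (batch, seq) cell positions.
        h = self.base(x)
        z = x @ self.A.T                                 # (batch, seq, rank)
        gate = self.row_emb(row_ids) + self.col_emb(col_ids)
        z = z * gate                                     # position-aware low-rank signal
        return h + z @ self.B.T

In this reading, only the LoRA factors and the row/column embeddings are trained, so the adapter stays parameter-efficient while giving the model an explicit signal about where each cell sits in the two-dimensional table; the paper's actual architecture and training details may differ.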

@article{he2025_2503.04396,
  title={TableLoRA: Low-rank Adaptation on Table Structure Understanding for Large Language Models},
  author={Xinyi He and Yihao Liu and Mengyu Zhou and Yeye He and Haoyu Dong and Shi Han and Zejian Yuan and Dongmei Zhang},
  journal={arXiv preprint arXiv:2503.04396},
  year={2025}
}