TabGen-ICL: Residual-Aware In-Context Example Selection for Tabular Data Generation

23 February 2025
Liancheng Fang
Aiwei Liu
Hengrui Zhang
Henry Peng Zou
Weizhi Zhang
Philip S. Yu
Abstract

Large language models (LLMs) have achieved encouraging results in tabular data generation. However, existing approaches require fine-tuning, which is computationally expensive. This paper explores an alternative: prompting a fixed LLM with in-context examples. We observe that using randomly selected in-context examples hampers the LLM's performance, resulting in sub-optimal generation quality. To address this, we propose TabGen-ICL, a novel in-context learning framework that enhances the in-context learning ability of LLMs for tabular data generation. TabGen-ICL operates iteratively, retrieving a subset of real samples that represents the residual between the currently generated samples and the true data distribution. This serves two purposes: locally, it provides more effective in-context examples for the LLM in each iteration; globally, it progressively narrows the gap between generated and real data. Extensive experiments on five real-world tabular datasets demonstrate that TabGen-ICL significantly outperforms the random selection strategy, reducing the error rate by 3.5%–42.2% on fidelity metrics. We demonstrate for the first time that prompting a fixed LLM can yield high-quality synthetic tabular data. The code is provided at this https URL.
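To make the abstract's idea more concrete, here is a minimal sketch (not the authors' implementation) of what residual-aware in-context example selection could look like for a single numeric column: real rows are sampled with weights proportional to how under-represented their region is in the current synthetic data, and the selected rows are fed to an LLM as in-context examples each round. All names (`residual_weights`, `select_icl_examples`, `iterative_generation`, `llm_generate`) are hypothetical, and the histogram-based residual is an assumed simplification.

```python
# Hypothetical sketch of residual-aware in-context example selection.
# Assumes a single numeric column and a user-supplied `llm_generate` callable;
# this is an illustration of the idea, not the TabGen-ICL implementation.
import numpy as np
import pandas as pd


def residual_weights(real: pd.DataFrame, synth: pd.DataFrame, col: str, bins: int = 10) -> np.ndarray:
    """Weight each real row by how under-represented its bin of `col` is
    in the current synthetic sample (a crude one-column density residual)."""
    edges = np.histogram_bin_edges(real[col], bins=bins)
    p_real, _ = np.histogram(real[col], bins=edges, density=True)
    if len(synth):
        p_synth, _ = np.histogram(synth[col], bins=edges, density=True)
    else:
        p_synth = np.zeros(bins)
    residual = np.clip(p_real - p_synth, 0.0, None)        # where synthetic data falls short
    bin_idx = np.clip(np.digitize(real[col], edges) - 1, 0, bins - 1)
    w = residual[bin_idx] + 1e-8                            # small floor keeps every row selectable
    return w / w.sum()


def select_icl_examples(real, synth, col, k=8, rng=None):
    """Sample k real rows, biased toward regions the synthetic data misses."""
    if rng is None:
        rng = np.random.default_rng(0)
    idx = rng.choice(len(real), size=min(k, len(real)), replace=False,
                     p=residual_weights(real, synth, col))
    return real.iloc[idx]


def iterative_generation(real, llm_generate, col, n_rounds=5, k=8):
    """Each round: pick residual-aware in-context examples, prompt the LLM,
    and append the returned rows. `llm_generate` maps an examples DataFrame
    to a DataFrame of newly generated rows (user-supplied)."""
    synth = real.iloc[0:0].copy()                           # empty frame with the real schema
    for _ in range(n_rounds):
        examples = select_icl_examples(real, synth, col, k=k)
        synth = pd.concat([synth, llm_generate(examples)], ignore_index=True)
    return synth
```

In this reading, the per-round selection is the "local" benefit (examples target where generation is currently weak) and the accumulation loop is the "global" one (the synthetic distribution is nudged toward the real one over iterations); the paper's actual residual measure and prompting details are described in the full text.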

@article{fang2025_2502.16414,
  title={TabGen-ICL: Residual-Aware In-Context Example Selection for Tabular Data Generation},
  author={Liancheng Fang and Aiwei Liu and Hengrui Zhang and Henry Peng Zou and Weizhi Zhang and Philip S. Yu},
  journal={arXiv preprint arXiv:2502.16414},
  year={2025}
}