
Taxation Perspectives from Large Language Models: A Case Study on Additional Tax Penalties

5 March 2025
Eunkyung Choi
Young Jin Suh
Hun Park
Wonseok Hwang
Abstract

How capable are large language models (LLMs) in the domain of taxation? Although numerous studies have explored the legal domain in general, research dedicated to taxation remains scarce. Moreover, the datasets used in these studies are either simplified, failing to reflect real-world complexities, or unavailable as open source. To address this gap, we introduce PLAT, a new benchmark designed to assess the ability of LLMs to predict the legitimacy of additional tax penalties. PLAT is constructed to evaluate LLMs' understanding of tax law, particularly in cases where resolving the issue requires more than simply applying the relevant statutes. Our experiments with six LLMs reveal that their baseline capabilities are limited, especially when dealing with conflicting issues that demand a comprehensive understanding. However, we found that this limitation can be mitigated by enabling retrieval, self-reasoning, and discussion among multiple agents with specific role assignments.
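The mitigation the abstract describes — retrieval plus discussion among role-assigned agents — can be pictured with a minimal sketch. Everything below is illustrative, not the authors' implementation: `call_llm` and `retrieve_statutes` are stub placeholders (a real system would call an LLM API and query a tax-law corpus), and the role names and statute string are hypothetical examples.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    role: str                         # e.g. "taxpayer advocate", "judge"
    notes: list = field(default_factory=list)

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned verdict here."""
    return "legitimate" if "statute" in prompt else "not legitimate"

def retrieve_statutes(case: str) -> list:
    """Placeholder retrieval step; the citation below is illustrative."""
    return ["Framework Act on National Taxes, Art. 48"]

def discuss(case: str, roles: list, rounds: int = 2) -> str:
    """Run role-assigned agents over the same case and majority-vote."""
    agents = [Agent(role=r) for r in roles]
    context = retrieve_statutes(case)
    for _ in range(rounds):
        for agent in agents:
            prompt = (f"As the {agent.role}, given statutes {context} "
                      f"and case: {case}, is the penalty legitimate?")
            agent.notes.append(call_llm(prompt))
    # Final decision: simple majority over the last round of opinions.
    last = [a.notes[-1] for a in agents]
    return max(set(last), key=last.count)

verdict = discuss("Late filing due to hospitalization",
                  ["taxpayer advocate", "tax authority", "judge"])
print(verdict)  # -> "legitimate" (with this stub)
```

With real LLM calls, each agent's round would also see the other agents' prior notes, so the discussion can surface conflicting readings of the statutes before the vote.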

@article{choi2025_2503.03444,
  title={Taxation Perspectives from Large Language Models: A Case Study on Additional Tax Penalties},
  author={Eunkyung Choi and Young Jin Suh and Hun Park and Wonseok Hwang},
  journal={arXiv preprint arXiv:2503.03444},
  year={2025}
}