ResearchTrend.AI
Realistic Evaluation of TabPFN v2 in Open Environments

22 May 2025
Zi-Jian Cheng
Zi-Yi Jia
Zhi Zhou
Yu-Feng Li
Lan-Zhe Guo
Abstract

Tabular data, owing to its ubiquitous presence in real-world domains, has garnered significant attention in machine learning research. While tree-based models have long dominated tabular machine learning tasks, the recently proposed deep learning model TabPFN v2 has demonstrated unparalleled performance and scalability potential. Although extensive research has been conducted on TabPFN v2 to further improve its performance, most of this work remains confined to closed environments, neglecting the challenges that frequently arise in open environments. This raises the question: can TabPFN v2 maintain good performance in open environments? To this end, we conduct the first comprehensive evaluation of TabPFN v2's adaptability in open environments. We construct a unified evaluation framework covering various real-world challenges and use it to assess the robustness of TabPFN v2 under open-environment scenarios. Empirical results demonstrate that TabPFN v2 shows significant limitations in open environments, though it is well suited to small-scale, covariate-shifted, and class-balanced tasks. Tree-based models remain the optimal choice for general tabular tasks in open environments. To facilitate future research on open-environment challenges, we advocate for open-environment tabular benchmarks, multi-metric evaluation, and universal modules that strengthen model robustness. We publicly release our evaluation framework at this https URL.

@article{cheng2025_2505.16226,
  title={Realistic Evaluation of TabPFN v2 in Open Environments},
  author={Zi-Jian Cheng and Zi-Yi Jia and Zhi Zhou and Yu-Feng Li and Lan-Zhe Guo},
  journal={arXiv preprint arXiv:2505.16226},
  year={2025}
}