On the Robustness of Tabular Foundation Models: Test-Time Attacks and In-Context Defenses

3 June 2025
Mohamed Djilani
Thibault Simonetto
Karim Tit
Florian Tambon
Paul Récamier
Salah Ghamizi
Maxime Cordy
Mike Papadakis
Abstract

Recent tabular foundation models (FMs) such as TabPFN and TabICL leverage in-context learning to achieve strong performance without gradient updates or fine-tuning. However, their robustness to adversarial manipulation remains largely unexplored. In this work, we present a comprehensive study of the adversarial vulnerabilities of tabular FMs, focusing both on their fragility to targeted test-time attacks and on their potential misuse as adversarial tools. On three benchmarks spanning finance, cybersecurity, and healthcare, we show that small, structured perturbations to test inputs can significantly degrade prediction accuracy, even when the training context remains fixed. Additionally, we demonstrate that tabular FMs can be repurposed to generate evasion attacks that transfer to conventional models such as random forests and XGBoost, and, to a lesser extent, to deep tabular models. To robustify tabular FMs, we formulate the problem as an optimization over either the model weights (adversarial fine-tuning) or the context (adversarial in-context learning). We introduce an in-context adversarial training strategy that incrementally replaces the context with adversarially perturbed instances, without updating the model weights. Our approach improves robustness across multiple tabular benchmarks. Together, these findings position tabular FMs as both a target and a source of adversarial threats, highlighting the urgent need for robust training and evaluation practices in this emerging paradigm.
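The in-context defense described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the in-context learner is replaced by a toy k-NN predictor (any model whose predictions depend only on the provided context would fit), and the attack is a crude black-box random-perturbation search inside an epsilon-ball. The function names (`knn_predict`, `perturb`, `in_context_adversarial_training`) and all hyperparameters (`eps`, `trials`, `rounds`, `frac`) are hypothetical choices for this sketch.

```python
import numpy as np

def knn_predict(ctx_X, ctx_y, X, k=5):
    # Stand-in for an in-context learner: predictions depend only on the
    # context (ctx_X, ctx_y), with no trainable weights.
    d = np.linalg.norm(X[:, None, :] - ctx_X[None, :, :], axis=-1)
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest context rows
    votes = ctx_y[nn]
    return (votes.mean(axis=1) > 0.5).astype(int)

def perturb(model, ctx_X, ctx_y, X, y, eps=0.3, trials=20, rng=None):
    # Crude black-box attack: sample random perturbations within an
    # eps-ball and keep the first one that flips the model's prediction.
    rng = np.random.default_rng(0) if rng is None else rng
    X_adv = X.copy()
    for i in range(len(X)):
        for _ in range(trials):
            cand = X[i] + rng.uniform(-eps, eps, size=X.shape[1])
            if model(ctx_X, ctx_y, cand[None])[0] != y[i]:
                X_adv[i] = cand
                break
    return X_adv

def in_context_adversarial_training(ctx_X, ctx_y, rounds=3, frac=0.3, rng=None):
    # Incrementally replace a fraction of the context with adversarially
    # perturbed copies (labels kept correct), without any weight updates.
    rng = np.random.default_rng(0) if rng is None else rng
    ctx_X = ctx_X.copy()
    for _ in range(rounds):
        idx = rng.choice(len(ctx_X), size=int(frac * len(ctx_X)), replace=False)
        adv = perturb(knn_predict, ctx_X, ctx_y, ctx_X[idx], ctx_y[idx], rng=rng)
        ctx_X[idx] = adv
    return ctx_X, ctx_y
```

The key point the sketch captures is that the "training" loop only edits the context set; the predictor itself is never modified, which mirrors the paper's distinction between adversarial fine-tuning (optimizing weights) and adversarial in-context learning (optimizing the context).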

@article{djilani2025_2506.02978,
  title={On the Robustness of Tabular Foundation Models: Test-Time Attacks and In-Context Defenses},
  author={Mohamed Djilani and Thibault Simonetto and Karim Tit and Florian Tambon and Paul Récamier and Salah Ghamizi and Maxime Cordy and Mike Papadakis},
  journal={arXiv preprint arXiv:2506.02978},
  year={2025}
}