Tabular foundation models have exhibited strong in-context learning (ICL) capabilities on structured data, making accurate predictions on test sets without parameter updates by using training examples as context. This emerging capability positions tabular ICL as a competitive alternative to traditional gradient-boosted tree methods. However, while biases in conventional machine learning models are well documented, it remains unclear how these biases manifest in tabular ICL. This paper investigates the fairness implications of tabular ICL and explores three preprocessing strategies to mitigate bias: correlation removal, group-balanced demonstration selection, and uncertainty-based demonstration selection. Comprehensive experiments indicate that uncertainty-based demonstration selection consistently improves the group fairness of in-context predictions. The source code for reproducing the results of this work can be found at this https URL.
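To make the demonstration-selection idea concrete, below is a minimal sketch of one plausible instantiation of uncertainty-based demonstration selection; it is not the paper's exact procedure. It scores each candidate row by the predictive entropy of a simple proxy model and keeps the k highest-entropy rows as the ICL context. The proxy model, the entropy score, the "keep most uncertain" rule, and the function name select_demonstrations are all assumptions for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def select_demonstrations(X, y, k=32, seed=0):
    """Return indices of the k candidate rows with highest predictive uncertainty."""
    # Hypothetical proxy model; the paper's actual scoring model may differ.
    proxy = LogisticRegression(max_iter=1000, random_state=seed)
    # Out-of-fold class probabilities, so each row is scored by a model
    # that never trained on it.
    proba = cross_val_predict(proxy, X, y, cv=5, method="predict_proba")
    # Predictive entropy as the uncertainty score (higher = more uncertain).
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)
    # Keep the k most uncertain rows as in-context demonstrations.
    return np.argsort(entropy)[-k:]

# Usage sketch on synthetic data: the selected rows would then serve as
# the context set for a tabular foundation model (e.g., a TabPFN-style
# predictor fit on the demonstrations only).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)
demo_idx = select_demonstrations(X, y, k=32)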
@article{kenfack2025_2505.09503,
  title={Towards Fair In-Context Learning with Tabular Foundation Models},
  author={Patrik Kenfack and Samira Ebrahimi Kahou and Ulrich Aïvodji},
  journal={arXiv preprint arXiv:2505.09503},
  year={2025}
}