Rethinking Data Augmentation for Tabular Data in Deep Learning

17 May 2023
Soma Onishi, Shoya Meguro
Abstract

Tabular data is the most widely used data format in machine learning (ML). Although tree-based methods outperform deep learning (DL) based methods in supervised learning, recent literature reports that self-supervised learning with Transformer-based models outperforms tree-based methods. In the existing literature on self-supervised learning for tabular data, contrastive learning is the predominant method, and data augmentation is important for generating the different views it requires. However, data augmentation for tabular data has been difficult because of the unique structure and high complexity of tabular data. In addition, existing methods propose three main components together: the model structure, the self-supervised learning method, and the data augmentation. Previous works have therefore compared performance without considering these components individually, and it is not clear how each component affects actual performance. In this study, we focus on data augmentation to address these issues. We propose a novel data augmentation method, Mask Token Replacement (MTR), which replaces a portion of each tokenized column with the mask token; MTR takes advantage of the properties of the Transformer, which is becoming the predominant DL-based architecture for tabular data, to perform data augmentation on each column embedding. Through experiments with 13 diverse public datasets in both supervised and self-supervised learning scenarios, we show that MTR achieves competitive performance against existing data augmentation methods and improves model performance. In addition, we discuss the specific scenarios in which MTR is most effective and identify the scope of its application. The code is available at https://github.com/somaonishi/MTR/.
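To make the mechanism concrete, the following is a minimal PyTorch sketch of a mask-token replacement of this kind. It is an illustrative reconstruction rather than the authors' code: the function name, the replacement fraction p, and the tensor shapes are assumptions, and the actual implementation lives in the repository linked above.

import torch

def mask_token_replacement(x: torch.Tensor, mask_token: torch.Tensor, p: float = 0.15) -> torch.Tensor:
    # Illustrative sketch, not the official MTR implementation.
    # x:          column embeddings, shape (batch, n_columns, d_model)
    # mask_token: learnable mask embedding, shape (d_model,)
    # p:          assumed fraction of column embeddings to replace
    # Decide independently, per (sample, column) pair, whether to replace.
    replace = torch.rand(x.shape[:2], device=x.device) < p  # (batch, n_columns)
    # Swap the selected column embeddings for the broadcast mask token.
    return torch.where(replace.unsqueeze(-1), mask_token.expand_as(x), x)

Because the replacement is stochastic, calling such a function twice on the same batch would yield two different views of each sample for contrastive learning; in the supervised setting it would instead act as a regularizer on the input embeddings.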
