SMOTExT: SMOTE meets Large Language Models

19 May 2025
Mateusz Bystroński
Mikołaj Hołysz
Grzegorz Piotrowski
Nitesh Chawla
Tomasz Kajdanowicz
Abstract

Data scarcity and class imbalance are persistent challenges in training robust NLP models, especially in specialized domains or low-resource settings. We propose a novel technique, SMOTExT, that adapts the idea of the Synthetic Minority Over-sampling Technique (SMOTE) to textual data. Our method generates new synthetic examples by interpolating between BERT-based embeddings of two existing examples and then decoding the resulting latent point into text with the xRAG architecture. By leveraging xRAG's cross-modal retrieval-generation framework, we can effectively turn interpolated vectors into coherent text. While this is preliminary work supported by qualitative outputs only, the method shows strong potential for knowledge distillation and data augmentation in few-shot settings. Notably, our approach also shows promise for privacy-preserving machine learning: in early experiments, training models solely on generated data achieved comparable performance to models trained on the original dataset. This suggests a viable path toward safe and effective learning under data protection constraints.
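The core idea is the classic SMOTE step applied in embedding space: take two examples of the same (minority) class, embed them, and sample a point on the line segment between the two vectors. The sketch below illustrates only that interpolation step, using vanilla BERT [CLS] embeddings via the Hugging Face transformers library as a stand-in encoder; the model name, example sentences, and function names are illustrative assumptions, and the xRAG decoding step that maps the interpolated vector back to text is not shown, since it depends on the retrieval-generation stack described in the paper.

import torch
from transformers import AutoTokenizer, AutoModel

# Assumed encoder; the paper uses BERT-based embeddings compatible with xRAG.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(text: str) -> torch.Tensor:
    """Return a single [CLS] embedding vector for a piece of text."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = encoder(**inputs)
    return outputs.last_hidden_state[:, 0, :].squeeze(0)  # [CLS] token vector

def smote_interpolate(text_a: str, text_b: str, lam: float = None) -> torch.Tensor:
    """SMOTE-style interpolation between the embeddings of two same-class texts."""
    if lam is None:
        lam = torch.rand(1).item()  # random interpolation factor in [0, 1)
    emb_a, emb_b = embed(text_a), embed(text_b)
    return emb_a + lam * (emb_b - emb_a)

# Hypothetical minority-class examples; the synthetic vector would then be
# decoded into a new sentence by the xRAG generator (not shown here).
synthetic_vec = smote_interpolate(
    "The patient reported mild chest pain.",
    "The patient complained of shortness of breath.",
)

As with tabular SMOTE, the interpolation factor controls how close the synthetic point lies to either parent example; the paper's contribution is making such latent points usable by decoding them back into coherent text.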

@article{bystroński2025_2505.13434,
  title={SMOTExT: SMOTE meets Large Language Models},
  author={Mateusz Bystroński and Mikołaj Hołysz and Grzegorz Piotrowski and Nitesh V. Chawla and Tomasz Kajdanowicz},
  journal={arXiv preprint arXiv:2505.13434},
  year={2025}
}