CAARMA: Class Augmentation with Adversarial Mixup Regularization

20 March 2025
Massa Baali
Xiang Li
Hao Chen
Rita Singh
Bhiksha Raj
ArXiv / PDF / HTML
Abstract

Speaker verification is a typical zero-shot learning task, where inference on unseen classes is performed by comparing embeddings of test instances to known examples. The models performing inference must therefore generate embeddings that cluster same-class instances compactly while maintaining separation across classes. To learn to do so, they are typically trained on a large number of classes (speakers), often using specialized losses. However, real-world speaker datasets often lack the class diversity needed to learn this in a generalizable manner. We introduce CAARMA, a class augmentation framework that addresses this problem by generating synthetic classes through data mixing in the embedding space, expanding the number of training classes. To ensure the authenticity of the synthetic classes, we adopt a novel adversarial refinement mechanism that minimizes categorical distinctions between synthetic and real classes. We evaluate CAARMA on multiple speaker verification tasks, as well as other representative zero-shot comparison-based speech analysis tasks, and obtain consistent improvements: the framework yields a significant 8% improvement over all baseline models. Code for CAARMA will be released.
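The abstract describes two ingredients: synthetic classes formed by mixing real speaker embeddings, and an adversarial objective that keeps the mixed embeddings indistinguishable from real ones. The sketch below illustrates that idea only; it is not the authors' released code, and the embedding size, discriminator architecture, and Beta mixing parameter are illustrative assumptions.

# Minimal PyTorch sketch of embedding-space mixup plus a real-vs-synthetic
# discriminator, based solely on the abstract. All names and hyperparameters
# here are assumptions, not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM = 192          # assumed speaker-embedding dimensionality
BATCH = 32

# Stand-in for a speaker encoder's output on a training batch.
embeddings = torch.randn(BATCH, EMB_DIM)

def mixup_synthetic_classes(emb, alpha=0.2):
    """Mix random pairs of embeddings to create instances of synthetic classes."""
    lam = torch.distributions.Beta(alpha, alpha).sample((emb.size(0), 1))
    perm = torch.randperm(emb.size(0))
    return lam * emb + (1.0 - lam) * emb[perm]

# Discriminator that tries to tell real embeddings from mixed (synthetic) ones.
discriminator = nn.Sequential(
    nn.Linear(EMB_DIM, 128), nn.ReLU(), nn.Linear(128, 1)
)

synthetic = mixup_synthetic_classes(embeddings)
real_logit = discriminator(embeddings)
fake_logit = discriminator(synthetic)

# Discriminator side: separate real from synthetic classes.
d_loss = (F.binary_cross_entropy_with_logits(real_logit, torch.ones_like(real_logit))
          + F.binary_cross_entropy_with_logits(fake_logit, torch.zeros_like(fake_logit)))

# Encoder side (encoder omitted here): push synthetic embeddings to look real,
# i.e. minimize the categorical distinction between synthetic and real classes.
g_loss = F.binary_cross_entropy_with_logits(fake_logit, torch.ones_like(fake_logit))

print(f"discriminator loss: {d_loss.item():.3f}, adversarial refinement loss: {g_loss.item():.3f}")

In a full training loop these losses would be combined with the speaker-classification loss over both real and synthetic classes; the weighting between the terms is not specified in the abstract.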

@article{baali2025_2503.16718,
  title={CAARMA: Class Augmentation with Adversarial Mixup Regularization},
  author={Massa Baali and Xiang Li and Hao Chen and Rita Singh and Bhiksha Raj},
  journal={arXiv preprint arXiv:2503.16718},
  year={2025}
}