
Self-Supervised Learning for Neural Topic Models with Variance-Invariance-Covariance Regularization

Abstract

In this study, we propose a self-supervised neural topic model (NTM) that combines the strengths of NTMs and regularized self-supervised learning to improve performance. NTMs use neural networks to learn the latent topics hidden behind the words in documents, offering greater flexibility and more coherent topics than traditional topic models. Meanwhile, some self-supervised learning methods use a joint embedding architecture with two identical networks that produce similar representations for two augmented versions of the same input. Regularization is applied to these representations to prevent collapse, which would otherwise cause the networks to output constant or redundant representations for all inputs. Our model improves topic quality by explicitly regularizing the latent topic representations of anchor and positive samples. We also introduce an adversarial data augmentation method to replace the heuristic sampling method. We further develop several model variants, including ones built on an NTM that incorporates contrastive learning with both positive and negative samples. Experimental results on three datasets show that our models outperform baselines and state-of-the-art models both quantitatively and qualitatively.
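The regularizer described above follows the variance-invariance-covariance (VICReg-style) scheme: an invariance term pulls the anchor and positive topic representations together, a variance term keeps each latent dimension from collapsing to a constant, and a covariance term decorrelates the dimensions. The sketch below is a minimal, assumed implementation of such a regularizer in PyTorch, not the authors' code; the function name `vic_regularizer` and the weights `lambda_inv`, `mu_var`, `nu_cov`, and the variance target `gamma` are hypothetical.

```python
import torch
import torch.nn.functional as F

def vic_regularizer(z_anchor, z_pos,
                    lambda_inv=25.0, mu_var=25.0, nu_cov=1.0,
                    gamma=1.0, eps=1e-4):
    """VICReg-style regularizer for two batches of latent topic vectors,
    each of shape (batch_size, num_topics). Hyperparameters are assumed."""
    # Invariance: two augmented views of the same document should map to
    # similar topic representations.
    inv_loss = F.mse_loss(z_anchor, z_pos)

    # Variance: keep the per-dimension standard deviation above gamma so the
    # encoder cannot collapse to a constant output.
    std_a = torch.sqrt(z_anchor.var(dim=0) + eps)
    std_p = torch.sqrt(z_pos.var(dim=0) + eps)
    var_loss = torch.mean(F.relu(gamma - std_a)) + torch.mean(F.relu(gamma - std_p))

    # Covariance: push off-diagonal covariances toward zero so the latent
    # dimensions carry non-redundant information.
    def off_diag_cov(z):
        z = z - z.mean(dim=0)
        cov = (z.T @ z) / (z.shape[0] - 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        return off_diag.pow(2).sum() / z.shape[1]

    cov_loss = off_diag_cov(z_anchor) + off_diag_cov(z_pos)

    return lambda_inv * inv_loss + mu_var * var_loss + nu_cov * cov_loss
```

In use, this term would be added to the NTM's usual training objective (e.g., the evidence lower bound), with the three weights balancing topic reconstruction against the collapse-prevention constraints.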

@article{xu2025_2502.09944,
  title={Self-Supervised Learning for Neural Topic Models with Variance-Invariance-Covariance Regularization},
  author={Weiran Xu and Kengo Hirami and Koji Eguchi},
  journal={arXiv preprint arXiv:2502.09944},
  year={2025}
}