Domain Lexical Knowledge-based Word Embedding Learning for Text Classification under Small Data

Pre-trained language models such as BERT have proven powerful in many natural language processing tasks. However, in some text classification applications such as emotion recognition and sentiment analysis, BERT may not deliver satisfactory performance. This often happens in applications where keywords play critical roles in predicting class labels. Our investigation found that the root cause of the problem is that the context-based BERT embeddings of the keywords may not be discriminative enough to produce discriminative text representations for classification. Motivated by this finding, we develop a method to enhance word embeddings using domain-specific lexical knowledge. The knowledge-based embedding enhancement model projects the BERT embedding into a new space where within-class similarity and between-class difference are maximized. To implement the knowledge-based word embedding enhancement model, we also develop a knowledge acquisition algorithm that automatically collects lexical knowledge from open online sources. Experimental results on three classification tasks, namely sentiment analysis, emotion recognition, and question answering, demonstrate the effectiveness of our proposed word embedding enhancement model. The code and datasets are available at this https URL.
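To make the projection objective concrete, below is a minimal sketch (not the authors' released code) of the idea described in the abstract: a learned projection of BERT keyword embeddings into a space where within-class similarity is maximized and between-class similarity is minimized. All names (ProjectionHead, discriminative_loss) and dimensions are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    """Maps BERT embeddings (e.g., dim 768) into a discriminative space."""
    def __init__(self, in_dim: int = 768, out_dim: int = 128):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(in_dim, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Unit-normalize so dot products below are cosine similarities.
        return F.normalize(self.proj(x), dim=-1)

def discriminative_loss(z: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Pull same-class projections together, push different-class ones apart.
    Assumes the batch contains at least two classes."""
    sim = z @ z.t()                                    # pairwise cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)  # within-class mask
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    within = sim[same & ~eye].mean()                   # mean within-class similarity
    between = sim[~same].mean()                        # mean between-class similarity
    return between - within                            # minimizing this widens the margin

Usage sketch: z = ProjectionHead()(bert_keyword_embeddings); loss = discriminative_loss(z, y). The paper additionally drives this objective with automatically acquired lexical knowledge; that acquisition step is not reproduced here.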
@article{zhu2025_2506.01621,
  title   = {Domain Lexical Knowledge-based Word Embedding Learning for Text Classification under Small Data},
  author  = {Zixiao Zhu and Kezhi Mao},
  journal = {arXiv preprint arXiv:2506.01621},
  year    = {2025}
}