SoTCKGE: Continual Knowledge Graph Embedding Based on Spatial Offset Transformation
Current Continual Knowledge Graph Embedding (CKGE) methods primarily rely on translation-based embedding methods, leveraging previously acquired knowledge to initialize new facts. To improve learning efficiency, these methods often incorporate fine-tuning or continual learning strategies. However, this compromises prediction accuracy, and translation-based methods lack support for complex relational structures such as multi-hop relations. To address this challenge, we propose SoTCKGE, a novel CKGE framework grounded in spatial offset transformation. Within this framework, an entity's position is jointly determined by a base position vector and an offset vector. This not only enhances the model's ability to represent complex relational structures but also allows the embeddings of both new and old knowledge to be updated through simple spatial offset transformations, without resorting to continual learning methods. Furthermore, we introduce a hierarchical update strategy and a balanced embedding method to refine the parameter update process, effectively reducing training costs and improving model accuracy. To comprehensively assess the performance of our model, we conduct extensive experiments on four publicly available datasets and a new dataset constructed by us. Experimental results demonstrate the advantage of our model in enhancing multi-hop relation learning and further improving prediction accuracy.
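To make the base-plus-offset idea concrete, the following is a minimal sketch, assuming an entity's position is formed additively from a frozen base vector and a trainable offset vector, and that new facts are absorbed by nudging only the offsets. All names, the relation embedding, and the update rule here are illustrative assumptions, not the authors' exact SoTCKGE formulation.

```python
import numpy as np

# Illustrative sketch of the base + offset idea described in the abstract.
rng = np.random.default_rng(0)
dim = 64

# Each entity's position is jointly determined by a base position vector
# and an offset vector: position = base + offset.
base = {"Paris": rng.normal(size=dim), "France": rng.normal(size=dim)}
offset = {e: np.zeros(dim) for e in base}

def position(entity):
    return base[entity] + offset[entity]

def add_new_fact(head, relation_vec, tail, lr=0.1):
    # Hypothetical translation-style residual: nudge the tail's offset so
    # that position(head) + relation_vec moves closer to position(tail),
    # while the base vectors (old knowledge) stay frozen.
    residual = (position(head) + relation_vec) - position(tail)
    offset[tail] += lr * residual

capital_of = rng.normal(size=dim)  # hypothetical relation embedding
add_new_fact("Paris", capital_of, "France")
```

In this sketch, keeping the base vectors fixed is what stands in for preserving old knowledge, while the cheap offset update stands in for the spatial offset transformation applied when new facts arrive.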
@article{wang2025_2503.08189,
  title={SoTCKGE: Continual Knowledge Graph Embedding Based on Spatial Offset Transformation},
  author={Xinyan Wang and Jinshuo Liu and Cheng Bi and Kaijian Xie and Meng Wang and Juan Deng and Jeff Pan},
  journal={arXiv preprint arXiv:2503.08189},
  year={2025}
}