
Never Skip a Batch: Continuous Training of Temporal GNNs via Adaptive Pseudo-Supervision

Abstract

Temporal Graph Networks (TGNs), while accurate, suffer from significant training inefficiencies: supervision signals in dynamic graphs arrive irregularly, which induces sparse gradient updates. We first establish theoretically that aggregating historical node interactions into pseudo-labels reduces gradient variance and thereby accelerates convergence. Building on this analysis, we propose History-Averaged Labels (HAL), a method that dynamically enriches training batches with pseudo-targets derived from historical label distributions. HAL ensures continuous parameter updates without architectural modifications by converting otherwise idle computation into productive learning steps. Experiments on the Temporal Graph Benchmark (TGB) validate our analysis and the assumption that user preferences change slowly: HAL accelerates TGNv2 training by up to 15x while maintaining competitive performance. This work thus offers an efficient, lightweight, architecture-agnostic, and theoretically motivated solution to label sparsity in temporal graph learning.
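To make the idea concrete, below is a minimal sketch of history-averaged pseudo-labels as described in the abstract: a running average of each node's past label distributions is kept, and when a batch contains nodes with no ground-truth supervision, that average serves as a pseudo-target so the batch still produces a gradient update. The class name, the exponential-decay weighting, and the uniform prior for unseen nodes are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np


class HistoryAveragedLabels:
    """Sketch of history-averaged pseudo-labels (assumed formulation)."""

    def __init__(self, num_classes: int, decay: float = 0.9):
        self.num_classes = num_classes
        self.decay = decay  # weight on older history (illustrative choice)
        self.history: dict[int, np.ndarray] = {}

    def update(self, node_id: int, label_dist: np.ndarray) -> None:
        """Fold an observed label distribution into the node's running average."""
        prev = self.history.get(node_id)
        if prev is None:
            self.history[node_id] = label_dist.copy()
        else:
            self.history[node_id] = self.decay * prev + (1.0 - self.decay) * label_dist

    def pseudo_target(self, node_id: int) -> np.ndarray:
        """Return the historical average, or a uniform prior for unseen nodes."""
        return self.history.get(
            node_id, np.full(self.num_classes, 1.0 / self.num_classes)
        )


# Usage: supply pseudo-targets for nodes lacking labels, so no batch is skipped.
hal = HistoryAveragedLabels(num_classes=3)
hal.update(node_id=7, label_dist=np.array([1.0, 0.0, 0.0]))
targets = [hal.pseudo_target(n) for n in (7, 42)]  # node 42 has no history yet
```

In this sketch, the pseudo-targets rest on the slowly-changing-preferences assumption mentioned in the abstract: a node's recent label history is treated as a reasonable stand-in for its current, unobserved label.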

@article{panyshev2025_2505.12526,
  title={Never Skip a Batch: Continuous Training of Temporal GNNs via Adaptive Pseudo-Supervision},
  author={Alexander Panyshev and Dmitry Vinichenko and Oleg Travkin and Roman Alferov and Alexey Zaytsev},
  journal={arXiv preprint arXiv:2505.12526},
  year={2025}
}