Towards Anomaly-Aware Pre-Training and Fine-Tuning for Graph Anomaly Detection

Abstract

Graph anomaly detection (GAD) has garnered increasing attention in recent years, yet it remains challenging due to two key factors: (1) label scarcity stemming from the high cost of annotation and (2) homophily disparity at the node and class levels. In this paper, we introduce Anomaly-Aware Pre-Training and Fine-Tuning (APF), a targeted and effective framework that mitigates these challenges in GAD. In the pre-training stage, APF incorporates node-specific subgraphs, selected via the Rayleigh Quotient, a label-free anomaly metric, into the learning objective to enhance anomaly awareness. It further introduces two learnable spectral polynomial filters that jointly learn dual representations capturing both general semantics and subtle anomaly cues. During fine-tuning, a gated fusion mechanism adaptively integrates the pre-trained representations across nodes and dimensions, while an anomaly-aware regularization loss encourages abnormal nodes to preserve more anomaly-relevant information. We further show theoretically that APF tends to achieve linear separability under mild conditions. Comprehensive experiments on 10 benchmark datasets validate the superior performance of APF over state-of-the-art baselines.
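For intuition, the Rayleigh Quotient referenced above is the standard spectral quantity x^T L x / x^T x, where L is the graph Laplacian: it measures how much high-frequency energy a graph signal carries, and high-frequency energy is a classic label-free indicator of anomalies. The minimal sketch below computes this quotient for the features of a (sub)graph; the abstract does not specify the paper's exact subgraph-selection procedure (hop size, Laplacian normalization), so the function name and these choices are illustrative assumptions.

import numpy as np
import scipy.sparse as sp

def rayleigh_quotient(adj, x):
    # Combinatorial Laplacian L = D - A of the (sub)graph;
    # adj is a scipy.sparse adjacency matrix, x is an (n, d) feature array.
    deg = np.asarray(adj.sum(axis=1)).ravel()
    lap = sp.diags(deg) - adj
    # x^T L x / x^T x, summed over feature dimensions.
    num = float(np.sum(x * (lap @ x)))
    den = float(np.sum(x * x)) + 1e-12  # guard against all-zero features
    return num / den

One plausible use, consistent with the abstract: score each node by the quotient of its k-hop ego subgraph and prioritize high-scoring subgraphs in the pre-training objective, since a higher quotient means more high-frequency, anomaly-like signal.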
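The dual representations come from two learnable spectral polynomial filters. A minimal sketch of one such filter follows, assuming a monomial basis in the Laplacian (sum_k theta_k L^k X); the paper may well use a different polynomial basis (e.g., Chebyshev or Bernstein), so treat this as one standard instantiation, not the authors' exact design.

import torch
import torch.nn as nn

class PolyFilter(nn.Module):
    # Learnable order-K polynomial spectral filter: sum_k theta_k L^k X.
    def __init__(self, order):
        super().__init__()
        self.theta = nn.Parameter(torch.empty(order + 1))
        nn.init.normal_(self.theta, std=0.1)

    def forward(self, lap, x):
        # lap: (n, n) Laplacian (dense or sparse), x: (n, d) features.
        out = self.theta[0] * x
        h = x
        for k in range(1, self.theta.numel()):
            h = lap @ h  # next Laplacian power applied to the features
            out = out + self.theta[k] * h
        return out

Two such filters with separate coefficient vectors can specialize during training, one toward low-frequency general semantics and one toward high-frequency anomaly cues, which matches the dual-representation idea described above.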
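Finally, the fine-tuning stage fuses the two pre-trained views with a gate that varies per node and per feature dimension. The abstract does not give the gate's parameterization, so the sketch below is one common form (a sigmoid gate over the concatenated views) offered purely for illustration.

import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    # Per-node, per-dimension gate blending two pre-trained representations.
    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h_general, h_anomaly):
        # Gate has the same shape as the representations, so the fusion
        # adapts across both nodes and feature dimensions.
        g = torch.sigmoid(self.gate(torch.cat([h_general, h_anomaly], dim=-1)))
        return g * h_general + (1.0 - g) * h_anomaly

An anomaly-aware regularization term, as described above, would then push the gate of abnormal nodes toward the anomaly-oriented view so they retain more anomaly-relevant information.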

@article{liu2025_2504.14250,
  title={Towards Anomaly-Aware Pre-Training and Fine-Tuning for Graph Anomaly Detection},
  author={Yunhui Liu and Jiashun Cheng and Yiqing Lin and Qizhuo Xie and Jia Li and Fugee Tsung and Hongzhi Yin and Tao Zheng and Jianhua Zhao and Tieke He},
  journal={arXiv preprint arXiv:2504.14250},
  year={2025}
}