
TSRating: Rating Quality of Diverse Time Series Data by Meta-learning from LLM Judgment

Main: 9 pages · Bibliography: 4 pages · Appendix: 13 pages · 6 figures · 12 tables
Abstract

High-quality time series (TS) data are essential for ensuring TS model performance, rendering research on rating TS data quality indispensable. Existing methods have shown promising rating accuracy within individual domains, primarily by extending data quality rating techniques such as influence functions and Shapley values to account for temporal characteristics. However, they neglect the fact that real-world TS data can span vastly different domains and exhibit distinct properties, hampering the accurate and efficient rating of diverse TS data. In this paper, we propose TSRating, a novel and unified framework for rating the quality of time series data crawled from diverse domains. TSRating is built on the assumption that LLMs acquire ample knowledge during their extensive pretraining, enabling them to comprehend and discern quality differences in diverse TS data. We verify this assumption by devising a series of prompts to elicit quality comparisons from LLMs for pairs of TS samples. We then fit a dedicated rating model, termed TSRater, that converts the LLMs' judgments into efficient quality predictions, so that future TS samples can be rated with a single TSRater inference. To ensure cross-domain adaptability, we develop a meta-learning scheme to train TSRater on quality comparisons collected from nine distinct domains. To improve training efficiency, we employ signSGD for inner-loop updates, thus circumventing the demanding computation of hypergradients. Extensive experimental results on eleven benchmark datasets across three time series tasks, each using both conventional TS models and TS foundation models, demonstrate that TSRating outperforms baselines in terms of estimation accuracy, efficiency, and domain adaptability.
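The core recipe the abstract describes, learning a scalar quality score from pairwise "which sample is better?" judgments, with signSGD updates, can be illustrated with a minimal sketch. This is not the paper's implementation: the linear rater, the synthetic feature vectors, and the hidden quality direction `w_true` (standing in for the LLM judge) are all assumptions made here for illustration. It fits a Bradley-Terry-style preference model where each update uses only the sign of the gradient, as in signSGD.

```python
import math
import random

random.seed(0)
d = 6  # assumed feature dimension per TS sample (illustrative)

# Hypothetical "true" quality direction; comparisons derived from it
# stand in for the LLM's pairwise quality judgments.
w_true = [random.gauss(0, 1) for _ in range(d)]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def make_pairs(n):
    """Generate pairs (a, b) with label y=1 iff a is judged higher quality."""
    pairs = []
    for _ in range(n):
        a = [random.gauss(0, 1) for _ in range(d)]
        b = [random.gauss(0, 1) for _ in range(d)]
        diff = [ai - bi for ai, bi in zip(a, b)]
        pairs.append((a, b, 1.0 if dot(diff, w_true) > 0 else 0.0))
    return pairs

def signsgd_fit(pairs, steps=400, lr=0.01):
    """Fit a linear rater s(x) = w.x with a Bradley-Terry logistic loss,
    taking signSGD steps: w <- w - lr * sign(grad)."""
    w = [0.0] * d
    for _ in range(steps):
        grad = [0.0] * d
        for a, b, y in pairs:
            diff = [ai - bi for ai, bi in zip(a, b)]
            p = 1.0 / (1.0 + math.exp(-dot(diff, w)))  # P(a preferred)
            for i in range(d):
                grad[i] += (p - y) * diff[i]
        w = [wi - lr * (1.0 if g > 0 else -1.0 if g < 0 else 0.0)
             for wi, g in zip(w, grad)]
    return w

w = signsgd_fit(make_pairs(500))

# Held-out pairwise accuracy: how often the learned rater agrees
# with fresh judgments it never saw during fitting.
held_out = make_pairs(500)
acc = sum(
    (dot([ai - bi for ai, bi in zip(a, b)], w) > 0) == (y == 1.0)
    for a, b, y in held_out
) / len(held_out)
print(f"held-out pairwise accuracy: {acc:.2f}")
```

Once fitted, the rater scores any new sample with one cheap dot product, which mirrors the efficiency claim: the expensive LLM comparisons are needed only to produce training pairs, not at rating time.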

@article{wu2025_2506.01290,
  title={TSRating: Rating Quality of Diverse Time Series Data by Meta-learning from LLM Judgment},
  author={Shunyu Wu and Dan Li and Haozheng Ye and Zhuomin Chen and Jiahui Zhou and Jian Lou and Zibin Zheng and See-Kiong Ng},
  journal={arXiv preprint arXiv:2506.01290},
  year={2025}
}