Output Scaling: YingLong-Delayed Chain of Thought in a Large Pretrained Time Series Forecasting Model

20 May 2025
Xue Wang, Tian Zhou, Jinyang Gao, Bolin Ding, Jingren Zhou
Communities: AI4TS, AI4CE, LRM
Main: 9 pages, 8 figures, 15 tables; Bibliography: 4 pages; Appendix: 7 pages
Abstract

We present a joint forecasting framework for time series prediction that contrasts with traditional direct or recursive methods. This framework achieves state-of-the-art performance for our designed foundation model, YingLong, and reveals a novel scaling effect: longer outputs significantly enhance model accuracy due to delayed chain-of-thought reasoning in our non-causal approach. YingLong is a non-causal, bidirectional-attention, encoder-only transformer trained through masked token recovery, aligning more effectively with language understanding tasks than with generation tasks. Additionally, we boost performance by tackling output variance with a multi-input ensemble. We release four foundation models ranging from 6M to 300M parameters, demonstrating superior results in zero-shot tasks on the ETT and Weather datasets, where YingLong achieves the best performance in more than 60% of cases. To ensure generalizability, we assessed the models using the GIFT-Eval benchmark, which comprises 23 time series datasets across 7 domains. YingLong significantly outperformed the best time-series foundation models and end-to-end trained models by 14% and 44% in rank, respectively (this http URL). The pretrained 300M model is available at this https URL.
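For intuition, the following is a minimal, hypothetical PyTorch sketch of the forecasting style the abstract describes: an encoder-only transformer with bidirectional attention that recovers masked future positions appended to the observed context, plus a multi-input ensemble that averages forecasts made from several context lengths to reduce output variance. All module names, sizes, and the tokenization scheme are assumptions for illustration, not YingLong's actual implementation.

# Hypothetical sketch: masked-token-recovery forecasting with a bidirectional
# encoder, and a multi-input ensemble over several context lengths.
import torch
import torch.nn as nn


class MaskedRecoveryForecaster(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.proj_in = nn.Linear(1, d_model)                   # embed each scalar time step
        self.mask_token = nn.Parameter(torch.zeros(d_model))   # learned [MASK] embedding
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        # No causal mask is applied, so attention is fully bidirectional.
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.proj_out = nn.Linear(d_model, 1)                  # recover values at masked positions

    def forward(self, context: torch.Tensor, horizon: int) -> torch.Tensor:
        # context: (batch, context_len) -> forecast: (batch, horizon)
        tokens = self.proj_in(context.unsqueeze(-1))                    # (B, L, D)
        masks = self.mask_token.expand(tokens.size(0), horizon, -1)     # (B, H, D)
        hidden = self.encoder(torch.cat([tokens, masks], dim=1))        # jointly encode past + masked future
        return self.proj_out(hidden[:, -horizon:, :]).squeeze(-1)


def ensemble_forecast(model, context, horizon, context_lens=(32, 64, 96)):
    # Multi-input ensemble (one plausible reading of the abstract): forecast
    # from several truncated contexts and average to damp output variance.
    preds = [model(context[:, -L:], horizon) for L in context_lens]
    return torch.stack(preds).mean(dim=0)


if __name__ == "__main__":
    model = MaskedRecoveryForecaster()
    series = torch.randn(8, 96)                                 # batch of 8 series, 96 observed steps
    print(ensemble_forecast(model, series, horizon=24).shape)   # torch.Size([8, 24])

Because the encoder attends over the observed context and all masked future positions at once, lengthening the forecast window adds extra positions the model can reason over jointly, which is the mechanism the paper credits for its output-scaling ("delayed chain-of-thought") effect.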

@article{wang2025_2506.11029,
  title={Output Scaling: YingLong-Delayed Chain of Thought in a Large Pretrained Time Series Forecasting Model},
  author={Xue Wang and Tian Zhou and Jinyang Gao and Bolin Ding and Jingren Zhou},
  journal={arXiv preprint arXiv:2506.11029},
  year={2025}
}