Time Series Generation with Masked Autoencoder
- SyDaAI4TS
This paper shows that a masked autoencoder with an interpolator (InterpoMAE) is a scalable self-supervised generative model for time series. InterpoMAE masks random patches of the input time series and recovers the missing patches in latent space. The core design choice is that no mask tokens are used: InterpoMAE disentangles missing-patch recovery from the decoder, letting an interpolator recover the missing patches directly. This design helps InterpoMAE consistently and significantly outperform state-of-the-art (SoTA) benchmarks in time series generation. Our approach also shows promising scaling behaviour in various downstream tasks such as time series classification, prediction and imputation. As the only self-supervised generative model for time series, InterpoMAE is the first in the literature to allow explicit management of the synthetic data. Time series generation may now follow the trajectory of self-supervised learning.
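The mechanism described above (mask random patches, encode only the visible ones, and recover the missing patch representations in latent space with an interpolator instead of mask tokens) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the linear "encoder", "decoder", and dimension-wise linear interpolation are hypothetical stand-ins for the learned modules.

```python
import numpy as np

rng = np.random.default_rng(0)

def patchify(series, patch_len):
    # Split a 1-D series into non-overlapping patches.
    n = len(series) // patch_len
    return series[: n * patch_len].reshape(n, patch_len)

def interpo_mae_sketch(series, patch_len=8, mask_ratio=0.5, d_latent=16):
    """Hypothetical sketch of the InterpoMAE idea: encode visible
    patches, recover masked-patch latents by interpolation (no mask
    tokens), then decode all latents."""
    patches = patchify(series, patch_len)          # (n_patches, patch_len)
    n = len(patches)
    n_mask = int(n * mask_ratio)
    masked = rng.choice(n, size=n_mask, replace=False)
    visible = np.setdiff1d(np.arange(n), masked)   # sorted visible indices

    # Toy "encoder": a fixed random linear projection of visible patches.
    W_enc = rng.standard_normal((patch_len, d_latent)) / np.sqrt(patch_len)
    z_vis = patches[visible] @ W_enc               # (n_visible, d_latent)

    # Interpolator: fill every patch position's latent by linear
    # interpolation over the visible positions, dimension by dimension.
    z_all = np.stack(
        [np.interp(np.arange(n), visible, z_vis[:, d]) for d in range(d_latent)],
        axis=1,
    )                                              # (n_patches, d_latent)

    # Toy "decoder": project all latents back to patch space.
    W_dec = rng.standard_normal((d_latent, patch_len)) / np.sqrt(d_latent)
    recon = z_all @ W_dec                          # (n_patches, patch_len)
    return recon.reshape(-1), masked

series = np.sin(np.linspace(0, 4 * np.pi, 64))
recon, masked_idx = interpo_mae_sketch(series)
print(recon.shape, sorted(masked_idx))
```

The key point the sketch mirrors is that the decoder never sees mask tokens: every latent position is filled by the interpolator before decoding, so patch recovery is disentangled from the decoder.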
View on arXiv