
FaCTR: Factorized Channel-Temporal Representation Transformers for Efficient Time Series Forecasting

Main: 9 pages · Bibliography: 3 pages · Appendix: 15 pages · 18 figures · 16 tables
Abstract

While Transformers excel in language and vision, where inputs are semantically rich and exhibit univariate dependency structures, their architectural complexity yields diminishing returns in time series forecasting. Time series data is characterized by low per-timestep information density and complex dependencies across channels and covariates, and therefore requires conditioning on structured variable interactions. To address this mismatch and the resulting overparameterization, we propose FaCTR, a lightweight spatiotemporal Transformer with an explicitly structural design. FaCTR injects dynamic, symmetric cross-channel interactions, modeled via a low-rank Factorization Machine, into temporally contextualized patch embeddings through a learnable gating mechanism. It further encodes static and dynamic covariates for multivariate conditioning. Despite its compact design, FaCTR achieves state-of-the-art performance on eleven public forecasting benchmarks spanning both short-term and long-term horizons, with its largest variant using only about 400K parameters, on average 50x smaller than competitive spatiotemporal Transformer baselines. In addition, its structured design enables interpretability through cross-channel influence scores, an essential requirement for real-world decision-making. Finally, FaCTR supports self-supervised pretraining, positioning it as a compact yet versatile foundation for downstream time series tasks.
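As a rough illustration of the mechanism the abstract describes, the sketch below implements a symmetric, low-rank, FM-style channel mixer whose output is fused into patch embeddings through a learnable gate. It is a minimal reading of the abstract, not the authors' implementation: all names and shapes are assumed, and the factor matrix here is static, whereas FaCTR's interactions are dynamic.

import torch
import torch.nn as nn

class LowRankChannelMixer(nn.Module):
    """Symmetric low-rank (FM-style) cross-channel mixing, gated into patch embeddings.

    Hypothetical sketch based on the abstract; not the reference FaCTR code.
    """

    def __init__(self, num_channels: int, d_model: int, rank: int = 8):
        super().__init__()
        # One rank-`rank` factor vector per channel; scores = V V^T is the
        # symmetric pairwise term of a Factorization Machine.
        self.factors = nn.Parameter(torch.randn(num_channels, rank) * 0.02)
        self.gate = nn.Linear(d_model, d_model)  # learnable gating

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, patches, d_model), temporally contextualized patch embeddings
        scores = self.factors @ self.factors.T               # (C, C), symmetric
        weights = torch.softmax(scores, dim=-1)              # cross-channel influence scores
        mixed = torch.einsum("ij,bjpd->bipd", weights, x)    # mix information across channels
        g = torch.sigmoid(self.gate(x))                      # per-feature gate in [0, 1]
        return x + g * mixed                                 # gated residual fusion

# Usage: 7 channels, 16 patches per channel, 64-dim embeddings
mixer = LowRankChannelMixer(num_channels=7, d_model=64)
out = mixer(torch.randn(2, 7, 16, 64))                       # -> (2, 7, 16, 64)

Under this reading, the normalized score matrix would double as the per-pair influence estimate that the abstract cites for interpretability.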

@article{vijay2025_2506.05597,
  title={FaCTR: Factorized Channel-Temporal Representation Transformers for Efficient Time Series Forecasting},
  author={Yash Vijay and Harini Subramanyan},
  journal={arXiv preprint arXiv:2506.05597},
  year={2025}
}