
Binary Cumulative Encoding meets Time Series Forecasting

Main: 9 pages · Appendix: 4 pages · Bibliography: 2 pages · 7 figures · 13 tables
Abstract

Recent studies in time series forecasting have explored formulating regression as a classification task. By discretizing the continuous target space into bins and predicting over a fixed set of classes, these approaches benefit from stable training, robust uncertainty modeling, and compatibility with modern deep learning architectures. However, most existing methods rely on one-hot encoding, which ignores the inherent ordinal structure of the underlying values. As a result, they fail to convey the relative distance between predicted and true values during training. In this paper, we address this limitation by introducing binary cumulative encoding (BCE), which represents scalar targets as monotonic binary vectors. This encoding implicitly preserves order and magnitude information, allowing the model to learn distance-aware representations while still operating within a classification framework. We propose a convolutional neural network architecture specifically designed for BCE, incorporating residual and dilated convolutions to enable fast and expressive temporal modeling. Through extensive experiments on benchmark forecasting datasets, we show that our approach outperforms widely used methods in both point and probabilistic forecasting, while requiring fewer parameters and enabling faster training.
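As a rough illustration of the idea (not the paper's exact formulation), a cumulative, thermometer-style encoding can be sketched as below. The equal-width bin edges, the strict-inequality convention, and the expected-count decoder are illustrative assumptions; the key property is that the number of leading ones grows monotonically with the target value, so nearby values share most of their code bits.

```python
import numpy as np

def binary_cumulative_encode(y, bin_edges):
    """Encode scalar targets as monotonic binary (thermometer) vectors.

    For K interior bin edges, each target y maps to a K-dimensional vector
    whose k-th entry is 1 if y exceeds the k-th edge, so codes of nearby
    values differ in only a few bits.
    """
    y = np.asarray(y, dtype=float)
    # Shape (n_samples, n_edges); entry (n, k) = 1 iff y[n] > bin_edges[k].
    return (y[:, None] > bin_edges[None, :]).astype(np.float32)

def binary_cumulative_decode(probs, bin_edges):
    """Recover a point estimate from per-edge probabilities P(y > edge_k).

    Summing the probabilities gives the expected number of exceeded edges,
    which indexes back into the binned value range (assumes equal-width bins).
    """
    bin_width = bin_edges[1] - bin_edges[0]
    lower = bin_edges[0] - bin_width          # lower bound of the first bin
    expected_count = probs.sum(axis=-1)       # expected number of ones
    return lower + (expected_count + 0.5) * bin_width

# Usage: discretize the range [0, 1] into 8 equal-width bins (hypothetical choice).
edges = np.linspace(0.0, 1.0, 9)[1:-1]        # 7 interior edges
targets = np.array([0.05, 0.40, 0.93])
codes = binary_cumulative_encode(targets, edges)
print(codes)                                  # rows of the form [1, ..., 1, 0, ..., 0]
print(binary_cumulative_decode(codes, edges)) # approximate reconstruction of the targets
```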

@article{chernov2025_2505.24595,
  title={Binary Cumulative Encoding meets Time Series Forecasting},
  author={Andrei Chernov and Vitaliy Pozdnyakov and Ilya Makarov},
  journal={arXiv preprint arXiv:2505.24595},
  year={2025}
}