Nexus sine qua non: Essentially Connected Networks for Traffic
Forecasting
- AI4TS
Spatial-temporal graph neural networks (STGNNs) have become the de facto models for learning spatiotemporal representations of traffic flow. However, modern STGNNs often contain superfluous or opaque components and rely on complex techniques, raising significant concerns about complexity and scalability. These concerns prompt us to rethink the design of neural architectures and to identify spatial-temporal contextualization as the key challenge in traffic forecasting. Here, we present an essentially connected model built on an efficient message-passing backbone and powered by learnable node embeddings, without any complex sequential modules such as TCNs, RNNs, or Transformers. Intriguingly, empirical results demonstrate that this simple and elegant model, equipped with contextualization capability, compares favorably with state-of-the-art models of elaborate structure, while being far more interpretable and computationally efficient for traffic forecasting. We anticipate that our findings will open new horizons for further research into simple yet effective neural forecasting architectures.
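The abstract gives no implementation details, but the core idea it names (a plain message-passing backbone augmented with learnable node embeddings for spatial contextualization, in place of TCN/RNN/Transformer modules) can be sketched roughly. Everything below is an assumption for illustration: the function name, the specific update rule (row-normalized neighbor aggregation plus a per-node embedding term), and the toy sizes are not taken from the paper.

```python
import numpy as np

def message_passing_layer(h, a_norm, w, node_emb):
    """One hypothetical message-passing step: aggregate neighbor
    features via a normalized adjacency, apply a shared linear
    transform, and add a learnable per-node embedding that
    contextualizes each sensor. (Illustrative; not the paper's
    exact update rule.)"""
    return np.maximum(a_norm @ (h @ w) + node_emb, 0.0)  # ReLU

# Toy setup: 4 road sensors on a line graph, 3 input features, 5 hidden units.
rng = np.random.default_rng(0)
n, f_in, f_hid = 4, 3, 5
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
adj_hat = adj + np.eye(n)                              # add self-loops
a_norm = adj_hat / adj_hat.sum(axis=1, keepdims=True)  # row-normalize

h = rng.standard_normal((n, f_in))        # current traffic features per sensor
w = rng.standard_normal((f_in, f_hid))    # shared weight matrix
node_emb = rng.standard_normal((n, f_hid))  # learnable node embeddings

out = message_passing_layer(h, a_norm, w, node_emb)
print(out.shape)  # (4, 5)
```

Because the node embedding term is unique to each sensor, stacking a few such layers lets the network specialize per location without any recurrent or attention machinery, which is one plausible reading of the efficiency claim.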