Modeling the Past and Future Contexts for Session-based Recommendation

Abstract

Session-based recommender systems have attracted much attention recently. To capture sequential dependencies, previous sequential recommendation models resort either to data augmentation techniques or a left-to-right autoregressive training approach. While effective, an obvious drawback is that future user behaviors are always missing during model training. In this paper, we argue that users' future action signals can be exploited to boost recommendation quality. We present GRec, a simple Gap-filling based encoder-decoder Recommendation framework for generative modeling with both past and future contexts. GRec encodes a partially-complete item sequence with blank masks and autoregressively reconstructs the item distributions at the masked positions. In contrast with the typical encoder-decoder paradigm used in the computer vision and NLP domains, GRec avoids the data leakage problem when jointly training the encoder and decoder conditioned on the same user action sequence. Experiments on real-world datasets with short-, medium-, and long-range user sessions show that GRec largely exceeds the performance of its left-to-right counterparts. Empirical evidence confirms that training sequential recommendation models with future contexts is a promising way to improve recommendation accuracy.
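The gap-filling idea described above can be illustrated with a minimal sketch: a fraction of the items in a session is replaced by a blank-mask token, the partially-complete sequence becomes the encoder input, and the decoder's targets are the original items at the masked positions. The names here (`MASK_ID`, `mask_session`) are illustrative assumptions, not identifiers from the paper.

```python
# Hypothetical sketch of gap-filling data preparation for a session,
# assuming item ids are positive integers and id 0 is reserved as the
# blank-mask token. Not the paper's actual implementation.
import random

MASK_ID = 0  # assumed reserved token id for a blank mask

def mask_session(items, mask_ratio=0.3, rng=None):
    """Replace ~mask_ratio of the items with MASK_ID.

    Returns (encoder_input, targets), where targets[i] holds the
    original item at each masked position and None elsewhere.
    """
    rng = rng or random.Random()
    n_mask = max(1, int(len(items) * mask_ratio))
    positions = set(rng.sample(range(len(items)), n_mask))
    encoder_input = [MASK_ID if i in positions else it
                     for i, it in enumerate(items)]
    targets = [it if i in positions else None
               for i, it in enumerate(items)]
    return encoder_input, targets

session = [12, 7, 33, 5, 19, 42]           # one user session of item ids
enc_in, tgt = mask_session(session, mask_ratio=0.3, rng=random.Random(0))
```

Because the masked positions can fall anywhere in the sequence, both past and future items surround each blank, which is what lets a model trained this way condition on future context without leaking the target item itself.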
