TIMeSynC: Temporal Intent Modelling with Synchronized Context Encodings for Financial Service Applications

Abstract

Users engage with financial services companies through multiple channels, often interacting with mobile applications, web platforms, call centers, and physical locations to service their accounts. The resulting interactions are recorded at heterogeneous temporal resolutions across these domains. This multi-channel data can be combined and encoded to create a comprehensive representation of the customer's journey for accurate intent prediction, which demands sequential learning solutions. Encoder-decoder transformers from neural machine translation (NMT) achieve state-of-the-art sequential representation learning by encoding context and decoding the next best action, capturing long-range dependencies. However, three major challenges arise when combining multi-domain sequences within an encoder-decoder transformer architecture for intent prediction: a) aligning sequences with different sampling rates; b) learning temporal dynamics across multi-variate, multi-domain sequences; and c) combining dynamic and static sequences. We propose an encoder-decoder transformer model that addresses these challenges for contextual and sequential intent prediction in financial servicing applications. Our experiments show significant improvement over the existing tabular method.
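To make challenge (a) concrete, consider aligning two event streams recorded at different temporal resolutions before feeding them to a sequence model. The sketch below uses a backward as-of join, which matches each fine-grained event to the most recent event from a coarser stream; the channel names, timestamps, and column labels are illustrative assumptions, not the paper's actual data schema or method.

```python
import pandas as pd

# Hypothetical multi-channel event logs at different temporal resolutions
# (all values are illustrative, not from the paper).
app_events = pd.DataFrame({
    "ts": pd.to_datetime(["2025-01-01 09:00:05", "2025-01-01 09:03:40"]),
    "app_action": ["login", "view_statement"],
})
call_events = pd.DataFrame({
    "ts": pd.to_datetime(["2025-01-01 09:01:00"]),
    "call_reason": ["dispute_charge"],
})

# Backward as-of join: attach to each app event the most recent
# call-center event at or before its timestamp.
aligned = pd.merge_asof(
    app_events.sort_values("ts"),
    call_events.sort_values("ts"),
    on="ts",
    direction="backward",
)
print(aligned)
```

Events that precede any coarser-stream observation simply carry a missing value, which a downstream model can treat as "no prior context from that channel".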

@article{katariya2025_2410.12825,
  title={TIMeSynC: Temporal Intent Modelling with Synchronized Context Encodings for Financial Service Applications},
  author={Dwipam Katariya and Juan Manuel Origgi and Yage Wang and Thomas Caputo},
  journal={arXiv preprint arXiv:2410.12825},
  year={2025}
}