AstBERT: Enabling Language Model for Financial Code Understanding with Abstract Syntax Trees

Abstract

Using pre-trained language models (e.g., BERT) to understand source code has attracted increasing attention from financial institutions owing to its great potential for uncovering financial risks. However, several challenges arise when applying these language models directly to programming language (PL) problems. To this end, we propose AstBERT, a pre-trained language model designed to better understand financial PL with the help of abstract syntax trees (ASTs). Specifically, we collect a large amount of source code (both Java and Python) from the Alipay code repository and incorporate both syntactic and semantic code knowledge into our model via code parsers, through which the AST information of the source code is interpreted and integrated. We evaluate the proposed model on three tasks: code question answering, code clone detection, and code refinement. Experimental results show that AstBERT achieves promising performance on all three downstream tasks.
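To make the AST-extraction step concrete, the following is a minimal sketch (not the paper's actual pipeline) of how a code parser can turn a Python snippet into a sequence of syntactic node types; such a sequence is the kind of structural signal an AST-aware encoder could consume alongside the raw code tokens. The function name `ast_node_types` is illustrative, not from the paper.

```python
import ast

def ast_node_types(source: str) -> list[str]:
    """Parse a Python snippet and collect its AST node-type names.

    Illustrative only: a flattened node-type sequence like this is one
    simple way to expose syntactic structure to a BERT-style encoder.
    """
    tree = ast.parse(source)
    # ast.walk yields every node in the tree (breadth-first).
    return [type(node).__name__ for node in ast.walk(tree)]

snippet = "def add(a, b):\n    return a + b\n"
print(ast_node_types(snippet))
```

For Java source, a parser such as javalang plays the analogous role; the output here would include node types like `Module`, `FunctionDef`, and `BinOp` for the snippet above.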
