Syntax Aware LSTM Model for Chinese Semantic Role Labeling

Traditional approaches to Semantic Role Labeling (SRL) depend heavily on manual feature engineering. A recurrent neural network (RNN) with long short-term memory (LSTM) treats the sentence only as sequence data and cannot utilize higher-level syntactic information. In this paper, we propose the Syntax Aware LSTM (SA-LSTM), which gives the RNN-LSTM the ability to exploit higher-level syntactic information derived from dependency relations. SA-LSTM also automatically assigns different trainable weights to different types of dependency relations. Experimental results on the Chinese Proposition Bank (CPB) show that, even without pre-training or introducing any extra semantically annotated resources, our SA-LSTM model significantly outperforms the state of the art under a Student's t-test. The trained weights for the dependency relation types form a stable and self-explanatory pattern.
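To make the idea concrete, below is a minimal sketch of how dependency relations with per-type trainable weights could be injected into an LSTM step. It is not the paper's exact formulation: the module name `SyntaxAwareLSTMCell`, the use of the dependency head's hidden state as the syntactic signal, and the extra sigmoid "syntax gate" are illustrative assumptions layered on a standard PyTorch `LSTMCell`.

```python
import torch
import torch.nn as nn


class SyntaxAwareLSTMCell(nn.Module):
    """Hypothetical sketch of an LSTM cell augmented with dependency syntax.

    Assumption: at each step, the hidden state of the current word's
    dependency head is scaled by a trainable scalar for its relation type
    and mixed into the hidden state through an extra gate.
    """

    def __init__(self, input_size, hidden_size, num_relation_types):
        super().__init__()
        self.cell = nn.LSTMCell(input_size, hidden_size)
        # One trainable weight per dependency relation type (assumption).
        self.relation_weight = nn.Parameter(torch.ones(num_relation_types))
        # Gate controlling how much syntactic information flows in.
        self.syntax_gate = nn.Linear(input_size + 2 * hidden_size, hidden_size)

    def forward(self, x_t, state, h_head, relation_type):
        h_prev, c_prev = state
        # Scale the head word's hidden state by its relation-type weight.
        s_t = self.relation_weight[relation_type].unsqueeze(-1) * h_head
        # Decide, per dimension, how much of the syntactic signal to admit.
        g_t = torch.sigmoid(
            self.syntax_gate(torch.cat([x_t, h_prev, s_t], dim=-1))
        )
        # Ordinary LSTM update, then inject the gated syntactic signal.
        h_t, c_t = self.cell(x_t, (h_prev, c_prev))
        h_t = h_t + g_t * s_t
        return h_t, (h_t, c_t)


# Usage sketch: batch of 2 words, 50-dim embeddings, 100-dim hidden states.
cell = SyntaxAwareLSTMCell(input_size=50, hidden_size=100, num_relation_types=40)
x_t = torch.randn(2, 50)
state = (torch.zeros(2, 100), torch.zeros(2, 100))
h_head = torch.randn(2, 100)                  # hidden states of dependency heads
relation_type = torch.tensor([3, 17])         # relation-type ids for each word
h_t, state = cell(x_t, state, h_head, relation_type)
```

Because `relation_weight` is a plain trainable vector, inspecting it after training would reveal which relation types the model relies on, which is the kind of self-explanatory pattern the abstract refers to.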