TiltedBERT: Resource Adjustable Version of BERT

IEEE Transactions on Neural Networks and Learning Systems (TNNLS), 2022
Abstract

In this paper, a novel adjustable fine-tuning method is proposed that improves the inference time of the BERT model on downstream tasks. The proposed method detects the more important word vectors in each layer with the proposed Attention Context Contribution (ACC) metric and eliminates the less important ones according to a proposed elimination strategy. With the TiltedBERT method, the model learns to work with considerably fewer Floating Point Operations (FLOPs) than the original BERT-base model. The proposed method does not require training from scratch and can be generalized to other transformer-based models. Extensive experiments show that the word vectors in higher layers contribute less and can be eliminated to improve inference time. Experimental results on a broad set of sentiment analysis, classification, and regression datasets, and on benchmarks such as IMDB and GLUE, show that TiltedBERT is effective across various datasets. TiltedBERT improves the inference time of BERT-base by up to 4.8 times with less than a 0.75% accuracy drop on average. After fine-tuning, the offline-tuning property allows the inference time of the model to be adjusted over a wide range of Tilt-Rate selections. In addition, a mathematical speedup analysis is proposed that accurately estimates TiltedBERT's speedup. With the help of this analysis, a proper Tilt-Rate value can be selected before fine-tuning and during the offline-tuning phase.
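The abstract's core idea, scoring each word vector by its attention contribution and keeping only the top fraction set by a Tilt-Rate, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the use of mean received attention as an ACC-like score, and the interpretation of Tilt-Rate as the kept fraction of tokens are all assumptions for demonstration.

```python
import numpy as np

def acc_scores(attn):
    # attn: (heads, seq, seq) attention probabilities for one layer.
    # Hypothetical ACC-like score: the average attention each token
    # receives across all heads and query positions. The paper's exact
    # ACC formula may differ.
    return attn.mean(axis=(0, 1))  # shape: (seq,)

def tilt_prune(hidden, attn, tilt_rate):
    # hidden: (seq, dim) word vectors for one layer.
    # Keep the ceil(seq * tilt_rate) highest-scoring tokens and drop
    # the rest before passing activations to the next layer.
    seq = hidden.shape[0]
    keep = max(1, int(np.ceil(seq * tilt_rate)))
    scores = acc_scores(attn)
    kept_idx = np.sort(np.argsort(scores)[-keep:])  # preserve token order
    return hidden[kept_idx], kept_idx

# Toy example: 8 tokens, 4-dim vectors, 2 attention heads.
rng = np.random.default_rng(0)
hidden = rng.normal(size=(8, 4))
attn = rng.random(size=(2, 8, 8))
attn = attn / attn.sum(axis=-1, keepdims=True)  # row-normalize like softmax
pruned, kept = tilt_prune(hidden, attn, tilt_rate=0.5)
print(pruned.shape)  # half of the word vectors survive: (4, 4)
```

Because later layers then operate on shorter sequences, the quadratic cost of self-attention shrinks, which is where the reported FLOP and inference-time savings come from.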
