
Semi-Supervised Learning for Text Classification by Layer Partitioning

Abstract

Most recent neural semi-supervised learning (SSL) algorithms rely on adding small perturbations to either the input vectors or their representations. These methods have been successful on computer vision tasks, where images form a continuous manifold, but they are not appropriate for discrete inputs such as sentences. To adapt these methods to text input, we propose to decompose a neural network M into two components F and U so that M = U ∘ F. The layers in F are then frozen, and only the layers in U are updated for most of the training. In this way, F serves as a feature extractor that maps the input to a high-level representation and adds systematic noise using dropout. We can then train U with any state-of-the-art SSL algorithm, such as the Π-model, temporal ensembling, or mean teacher. Furthermore, this gradual unfreezing schedule prevents a pretrained model from catastrophic forgetting. The experimental results demonstrate that our approach provides improvements over state-of-the-art methods, especially on short texts.
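The decomposition described in the abstract can be sketched in a few lines. The snippet below is a minimal, hypothetical NumPy illustration (not the paper's actual architecture): a frozen feature extractor F whose dropout supplies the stochastic perturbation, a trainable upper component U, and a Π-model-style consistency loss that penalizes disagreement between two noisy passes on unlabeled data. All names and shapes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen feature extractor F: a fixed random projection plus
# dropout. In the paper, F would be the lower layers of a pretrained model;
# its weights are frozen, so only dropout makes its output stochastic.
W_f = rng.normal(size=(16, 8))

def F(x, drop_p=0.5):
    h = np.maximum(x @ W_f, 0.0)             # frozen linear layer + ReLU
    mask = rng.random(h.shape) >= drop_p     # dropout injects the noise
    return h * mask / (1.0 - drop_p)         # inverted-dropout scaling

# Trainable upper component U: here just a linear classifier head.
W_u = rng.normal(size=(8, 3)) * 0.1

def U(h):
    return h @ W_u

# Pi-model-style consistency loss: two stochastic passes through F should
# yield similar predictions from U on the same unlabeled input.
def consistency_loss(x):
    z1 = U(F(x))
    z2 = U(F(x))   # a different dropout mask gives a second "view"
    return np.mean((z1 - z2) ** 2)

x_unlabeled = rng.normal(size=(4, 16))
loss = consistency_loss(x_unlabeled)
print(loss)
```

In a full training loop, this consistency term on unlabeled text would be added to the supervised loss on labeled examples, with gradients flowing only into U's parameters (here W_u) while F stays frozen.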
