
Improving Social Meaning Detection with Pragmatic Masking and Surrogate Fine-Tuning

Abstract

Masked language models (MLMs) are pre-trained with a denoising objective that is mismatched with the objective of downstream fine-tuning. We propose pragmatic masking and surrogate fine-tuning as two complementary strategies that exploit social cues to drive pre-trained representations toward a broad set of concepts useful for a wide class of social meaning tasks. We test our models on 15 different Twitter datasets for social meaning detection. Our methods achieve a 2.34% F1 improvement over a competitive baseline, while outperforming domain-specific language models pre-trained on large datasets. Our methods also excel in few-shot learning: with only 5% of training data (severely few-shot), our methods enable an impressive 68.54% average F1. The methods are also language-agnostic, as we show in a zero-shot setting involving six datasets from three different languages.
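The abstract does not spell out the masking mechanism, but the core idea of pragmatic masking lends itself to a short illustration. The following is a minimal sketch, assuming "pragmatic masking" means masking socially meaningful cue tokens (here, hashtags and emoji) at a higher rate than ordinary tokens so the MLM is pushed to recover social cues; the cue detectors, masking rates, and the `pragmatic_mask` helper are all hypothetical, not the paper's exact procedure.

```python
import random
import re

MASK = "[MASK]"

# Hypothetical cue detectors: hashtags and a rough emoji character range.
# The paper's actual cue inventory is not given in this abstract.
HASHTAG = re.compile(r"^#\w+$")
EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")


def pragmatic_mask(tokens, cue_prob=0.5, base_prob=0.15, seed=None):
    """Mask pragmatic cue tokens (hashtags, emoji) at a higher rate than
    ordinary tokens. Returns the masked sequence and per-token labels:
    the original token where masked, None where the loss should ignore it."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        is_cue = bool(HASHTAG.match(tok) or EMOJI.search(tok))
        p = cue_prob if is_cue else base_prob  # cues masked more aggressively
        if rng.random() < p:
            masked.append(MASK)
            labels.append(tok)   # model must predict the original cue/token
        else:
            masked.append(tok)
            labels.append(None)  # not masked, excluded from the MLM loss
    return masked, labels


tokens = "so proud of this team #GoTeam 🎉 what a night".split()
print(pragmatic_mask(tokens, seed=0))
```

Under this reading, the standard uniform 15% masking is kept for ordinary tokens (`base_prob`), while cue tokens are masked far more often (`cue_prob`), biasing pre-training toward the pragmatic signals that social meaning tasks depend on.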
