
Zero-Shot Multilingual Synthetic Question and Answer Generation for Cross-Lingual Reading Comprehension

Abstract

We propose a simple method for generating multilingual question and answer pairs at scale with a single generative model. These synthetic samples can be used to improve the zero-shot performance of multilingual QA models on target languages. Our proposed multi-task training of the generative model requires only English training samples, removing the need for labeled data in the target languages and making the approach applicable to far more languages than those with labeled data. Experimental results show that our approach achieves significant gains on several multilingual QA benchmarks, reducing the gap between the zero-shot and supervised performance of QA models across languages.
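As a rough illustration of the pipeline the abstract describes, the sketch below shows how a single multilingual sequence-to-sequence generator, fine-tuned only on English QA-generation data, might be applied zero-shot to target-language passages to produce synthetic question-answer pairs. The checkpoint path, prompt format, and "<sep>" output convention are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: use one multilingual generative model (trained only on English
# QA-generation data) to synthesize (question, answer) pairs from passages in a
# target language. Model path and prompt format are hypothetical placeholders.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_PATH = "path/to/multilingual-qg-model"  # hypothetical mT5-style generator

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_PATH)

def generate_qa_pairs(passage: str, num_pairs: int = 5):
    """Generate synthetic question-answer pairs for a target-language passage."""
    # Assumed convention: the generator emits "question <sep> answer" as plain text.
    inputs = tokenizer(
        "generate question and answer: " + passage,
        return_tensors="pt", truncation=True, max_length=512,
    )
    outputs = model.generate(
        **inputs,
        max_new_tokens=64,
        do_sample=True,            # sampling encourages diverse synthetic pairs
        top_p=0.95,
        num_return_sequences=num_pairs,
    )
    pairs = []
    for seq in outputs:
        text = tokenizer.decode(seq, skip_special_tokens=True)
        if "<sep>" in text:        # split on the assumed separator token
            question, answer = text.split("<sep>", 1)
            pairs.append((question.strip(), answer.strip()))
    return pairs

# Example: a German passage yields German QA pairs that can then augment the
# training data of a downstream multilingual QA model.
passage_de = "Die Donau ist mit 2.857 Kilometern der zweitlängste Fluss Europas."
for q, a in generate_qa_pairs(passage_de):
    print(q, "->", a)
```

In this kind of setup, the synthetic pairs would typically be filtered (for example, by round-trip answer consistency) before being mixed into the QA model's fine-tuning data.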
