Zero-Shot Multilingual Synthetic Question and Answer Generation for Cross-Lingual Reading Comprehension
- SyDa

We propose a simple method for generating multilingual question-answer pairs at scale with a single generative model. These synthetic samples can be used to improve the zero-shot performance of multilingual QA models on target languages. Our proposed multi-task training of the generative model requires only English training samples, removing the need for labeled data in the target languages and making the method applicable to far more languages than those with labeled data. Experimental results show that our approach achieves significant gains on several multilingual QA benchmarks, reducing the gap between the zero-shot and supervised performance of QA models across languages.
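The multi-task training described above can be sketched as text-to-text pairs for a T5-style generative model, where the same model learns both question generation and question answering from English data. The task prefixes, field names, and example below are illustrative assumptions, not the paper's actual format:

```python
# Illustrative sketch: building multi-task training pairs from English QA
# data for a single text-to-text generative model. The task prefixes and
# record layout are hypothetical, not taken from the paper.

def make_qg_example(context: str, answer: str, question: str) -> dict:
    """Question generation task: given context and answer, target the question."""
    return {
        "input": f"generate question: answer: {answer} context: {context}",
        "target": question,
    }

def make_qa_example(context: str, question: str, answer: str) -> dict:
    """Question answering task: given context and question, target the answer."""
    return {
        "input": f"answer question: question: {question} context: {context}",
        "target": answer,
    }

def build_multitask_data(samples: list[dict]) -> list[dict]:
    """Interleave both tasks so one model is trained on QG and QA jointly."""
    data = []
    for s in samples:
        data.append(make_qg_example(s["context"], s["answer"], s["question"]))
        data.append(make_qa_example(s["context"], s["question"], s["answer"]))
    return data

samples = [{
    "context": "The Amazon is the largest rainforest on Earth.",
    "question": "What is the largest rainforest on Earth?",
    "answer": "The Amazon",
}]
multitask = build_multitask_data(samples)
print(len(multitask))  # 2: one question-generation pair, one QA pair
```

At inference time, such a model could be prompted with the question-generation prefix on unlabeled target-language passages to produce the synthetic QA pairs used for fine-tuning.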