Augmenting Math Word Problems via Iterative Question Composing
- SyDaLRM
Despite recent progress in improving the mathematical reasoning ability of large language models (LLMs), solving competition-level math problems without the use of external tools remains challenging for open-source LLMs. In this work, we introduce the MMIQC dataset, a mixture of processed web data and synthetic question-response pairs, to equip base models with better mathematical reasoning skills. Across different model sizes, the models fine-tuned on MMIQC consistently outperform their counterparts by a clear margin on the MATH test set. Notably, DeepSeek-67B-MMIQC achieves 41.0% accuracy, 4.2% higher than the previous open-source SOTA. Our experiments also show that a large part of the improvement can be attributed to our novel augmentation method, IQC (Iterative Question Composing), in which we iteratively ask an LLM to compose new questions from given seed problems and perform rejection sampling on the responses of another LLM. MMIQC has been released at https://huggingface.co/datasets/Vivacem/MMIQC.
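The IQC loop described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `compose_question`, `sample_answer`, and `answer_is_correct` are hypothetical stand-ins for the real model calls and answer-checking logic.

```python
# Sketch of Iterative Question Composing (IQC): one LLM composes new
# questions from seed problems, a second LLM's candidate responses are
# filtered by rejection sampling, and the composed questions seed the
# next iteration. All three helpers below are hypothetical placeholders.

def compose_question(seed: str, iteration: int) -> str:
    # Stand-in for asking an LLM to compose a new question from a seed.
    return f"{seed} (variant {iteration})"

def sample_answer(question: str) -> str:
    # Stand-in for sampling a candidate response from another LLM.
    return f"answer to: {question}"

def answer_is_correct(question: str, answer: str) -> bool:
    # Stand-in for the rejection-sampling check
    # (e.g. comparing against a reference answer).
    return True

def iterative_question_composing(seeds, num_iterations=3, samples_per_question=4):
    """Grow a set of (question, answer) pairs from seed problems."""
    dataset = []
    current = list(seeds)
    for it in range(1, num_iterations + 1):
        next_round = []
        for seed in current:
            question = compose_question(seed, it)
            # Rejection sampling: keep the first response that passes the check.
            for _ in range(samples_per_question):
                answer = sample_answer(question)
                if answer_is_correct(question, answer):
                    dataset.append((question, answer))
                    break
            next_round.append(question)  # composed questions seed the next round
        current = next_round
    return dataset

pairs = iterative_question_composing(["What is 2 + 3?"], num_iterations=2)
```

With one seed and two iterations, the sketch yields one accepted pair per round, showing how the dataset grows as each iteration builds on the previously composed questions.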