Investigating Recent Large Language Models for Vietnamese Machine Reading Comprehension

23 March 2025
Anh Duc Nguyen
Hieu Minh Phi
Anh Viet Ngo
Long Hai Trieu
Thai Nguyen
Abstract

Large Language Models (LLMs) have shown remarkable proficiency in Machine Reading Comprehension (MRC) tasks; however, their effectiveness for low-resource languages like Vietnamese remains largely unexplored. In this paper, we fine-tune and evaluate two state-of-the-art LLMs: Llama 3 (8B parameters) and Gemma (7B parameters), on ViMMRC, a Vietnamese MRC dataset. By utilizing Quantized Low-Rank Adaptation (QLoRA), we efficiently fine-tune these models and compare their performance against powerful LLM-based baselines. Although our fine-tuned models are smaller than GPT-3 and GPT-3.5, they outperform both traditional BERT-based approaches and these larger models. This demonstrates the effectiveness of our fine-tuning process, showcasing how modern LLMs can surpass the capabilities of older models like BERT while still being suitable for deployment in resource-constrained environments. Through intensive analyses, we explore various aspects of model performance, providing valuable insights into adapting LLMs for low-resource languages like Vietnamese. Our study contributes to the advancement of natural language processing in low-resource languages, and we make our fine-tuned models publicly available at: this https URL.
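For readers unfamiliar with the technique named in the abstract, the sketch below illustrates the general QLoRA recipe (4-bit quantized base model plus low-rank adapters) applied to Llama 3 8B with Hugging Face transformers, peft, and bitsandbytes. This is a minimal illustration, not the authors' released code: the model identifier, adapter rank, target modules, and the omitted ViMMRC data loading and prompt formatting are assumptions and would need to match the paper's actual setup.

```python
# Minimal QLoRA setup sketch (assumed configuration, not the paper's exact hyperparameters).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Meta-Llama-3-8B"  # Gemma 7B would be swapped in analogously

# 4-bit NF4 quantization of the frozen base weights: the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters on the attention projections; rank and alpha here are
# illustrative defaults, not values reported in the paper.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trained
```

Because only the small adapter matrices are updated while the base model stays quantized and frozen, this kind of setup keeps the memory footprint low enough for fine-tuning 7B-8B models on a single GPU, which is consistent with the paper's emphasis on resource-constrained deployment.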

@article{nguyen2025_2503.18062,
  title={Investigating Recent Large Language Models for Vietnamese Machine Reading Comprehension},
  author={Anh Duc Nguyen and Hieu Minh Phi and Anh Viet Ngo and Long Hai Trieu and Thai Phuong Nguyen},
  journal={arXiv preprint arXiv:2503.18062},
  year={2025}
}