Challenges in Adapting Multilingual LLMs to Low-Resource Languages using LoRA PEFT Tuning

27 November 2024
Omkar Khade, Shruti Jagdale, Abhishek Phaltankar, Gauri Takalikar, Raviraj Joshi
arXiv: 2411.18571

Papers citing "Challenges in Adapting Multilingual LLMs to Low-Resource Languages using LoRA PEFT Tuning"

5 papers shown

Typhoon T1: An Open Thai Reasoning Model
Pittawat Taveekitworachai, Potsawee Manakul, Kasima Tharnpipitchai, Kunat Pipatanakul
OffRL, LRM · 223 · 0 · 0 · 13 Feb 2025

Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey
Zeyu Han, Chao Gao, Jinyang Liu, Jeff Zhang, Sai Qian Zhang
251 · 398 · 0 · 21 Mar 2024

LoRA: Low-Rank Adaptation of Large Language Models
J. E. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen
OffRL, AI4TS, AI4CE, ALM, AIMat · 502 · 10,526 · 0 · 17 Jun 2021

Multilingual Translation with Extensible Multilingual Pretraining and Finetuning
Y. Tang, C. Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, Angela Fan
CLL · 137 · 462 · 0 · 02 Aug 2020

MultiFiT: Efficient Multi-lingual Language Model Fine-tuning
Julian Martin Eisenschlos, Sebastian Ruder, Piotr Czapla, Marcin Kardas, Sylvain Gugger, Jeremy Howard
51 · 99 · 0 · 10 Sep 2019