
A Case Study of Cross-Lingual Zero-Shot Generalization for Classical Languages in LLMs

Main: 5 pages · 5 figures · 11 tables · Bibliography: 4 pages · Appendix: 8 pages
Abstract

Large Language Models (LLMs) have demonstrated remarkable generalization capabilities across diverse tasks and languages. In this study, we focus on natural language understanding in three classical languages -- Sanskrit, Ancient Greek, and Latin -- to investigate the factors affecting cross-lingual zero-shot generalization. First, we explore named entity recognition and machine translation into English. While LLMs perform on par with or better than fine-tuned baselines on out-of-domain data, smaller models often struggle, especially with niche or abstract entity types. We then concentrate on Sanskrit, presenting a factoid question-answering (QA) dataset and showing that incorporating context via a retrieval-augmented generation (RAG) approach significantly boosts performance. In contrast, we observe pronounced performance drops for smaller LLMs across these QA tasks. These results suggest that model scale is an important factor influencing cross-lingual generalization. Assuming that the models used, such as GPT-4o and Llama-3.1, are not instruction fine-tuned on classical languages, our findings provide insights into how LLMs may generalize to these languages and into their consequent utility in classical studies.
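The retrieval-augmented QA setup mentioned in the abstract can be sketched minimally as: retrieve the passages most relevant to a question, then prepend them as context to the prompt sent to the LLM. The sketch below is illustrative only, not the authors' pipeline; the toy corpus, the token-overlap retriever, and the function names are all assumptions.

```python
import re

def tokenize(text: str) -> set[str]:
    # Lowercased word tokens; punctuation is discarded.
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank passages by token overlap with the question (toy retriever;
    a real system would use lexical BM25 or dense embeddings)."""
    q = tokenize(question)
    ranked = sorted(corpus, key=lambda p: len(q & tokenize(p)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, passages: list[str]) -> str:
    """Prepend the retrieved context so the model answers from evidence."""
    context = "\n".join(passages)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

# Hypothetical mini-corpus of English facts about Sanskrit literature.
corpus = [
    "Kalidasa composed the epic poem Raghuvamsha in Sanskrit.",
    "Panini authored the Ashtadhyayi, a grammar of Sanskrit.",
]
question = "Who authored the Ashtadhyayi?"
prompt = build_prompt(question, retrieve(question, corpus))
# `prompt` would then be sent to the LLM in place of the bare question.
```

In the paper's setting the payoff of this step is that the model no longer has to recall Sanskrit-specific facts from its parameters; it only has to read them out of the supplied context.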

@article{akavarapu2025_2505.13173,
  title={A Case Study of Cross-Lingual Zero-Shot Generalization for Classical Languages in LLMs},
  author={V.S.D.S.Mahesh Akavarapu and Hrishikesh Terdalkar and Pramit Bhattacharyya and Shubhangi Agarwal and Vishakha Deulgaonkar and Pralay Manna and Chaitali Dangarikar and Arnab Bhattacharya},
  journal={arXiv preprint arXiv:2505.13173},
  year={2025}
}