
Parsing the Switch: LLM-Based UD Annotation for Complex Code-Switched and Low-Resource Languages

Comments: 9 pages (main) + 2 pages bibliography + 5 pages appendix; 7 figures, 16 tables
Abstract

Code-switching presents a complex challenge for syntactic analysis, especially in low-resource language settings where annotated data is scarce. While recent work has explored the use of large language models (LLMs) for sequence-level tagging, few approaches systematically investigate how well these models capture syntactic structure in code-switched contexts. Moreover, existing parsers trained on monolingual treebanks often fail to generalize to multilingual and mixed-language input. To address this gap, we introduce the BiLingua Parser, an LLM-based annotation pipeline designed to produce Universal Dependencies (UD) annotations for code-switched text. First, we develop a prompt-based framework for Spanish-English and Spanish-Guaraní data, combining few-shot LLM prompting with expert review. Second, we release two annotated datasets, including the first Spanish-Guaraní UD-parsed corpus. Third, we conduct a detailed syntactic analysis of switch points across language pairs and communicative contexts. Experimental results show that the BiLingua Parser achieves up to 95.29% LAS after expert revision, significantly outperforming prior baselines and multilingual parsers. These findings demonstrate that LLMs, when carefully guided, can serve as practical tools for bootstrapping syntactic resources in under-resourced, code-switched environments. Data and source code are available at this https URL.
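
To make the pipeline described in the abstract concrete, below is a minimal sketch of the two ingredients it names: few-shot prompting of an LLM to emit UD annotations in CoNLL-U format for a code-switched sentence, and the standard labeled attachment score (LAS) used to report results. The prompt wording, the example sentence, and the function names (`build_prompt`, `labeled_attachment_score`) are illustrative assumptions, not the authors' released code; only the CoNLL-U column layout and the LAS definition are standard.

```python
# Sketch of few-shot LLM prompting for UD (CoNLL-U) annotation of
# code-switched text, plus a toy LAS check. Illustrative only; not the
# BiLingua Parser implementation.

FEW_SHOT_EXAMPLES = """\
# Example (Spanish-English code-switching), gold CoNLL-U:
# text = Quiero comprar that book
1\tQuiero\tquerer\tVERB\t_\t_\t0\troot\t_\t_
2\tcomprar\tcomprar\tVERB\t_\t_\t1\txcomp\t_\t_
3\tthat\tthat\tDET\t_\t_\t4\tdet\t_\t_
4\tbook\tbook\tNOUN\t_\t_\t2\tobj\t_\t_
"""

def build_prompt(sentence: str) -> str:
    """Assemble a few-shot prompt asking an LLM for a CoNLL-U parse."""
    return (
        "You are a Universal Dependencies annotator for code-switched text.\n"
        "Return one CoNLL-U line per token with the columns: ID, FORM, LEMMA, "
        "UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC.\n\n"
        f"{FEW_SHOT_EXAMPLES}\n"
        f"# text = {sentence}\n"
    )

def labeled_attachment_score(pred, gold):
    """LAS: fraction of tokens whose predicted (HEAD, DEPREL) both match gold.

    `pred` and `gold` are equal-length lists of (head, deprel) tuples.
    """
    assert len(pred) == len(gold)
    correct = sum(p == g for p, g in zip(pred, gold))
    return correct / len(gold)

if __name__ == "__main__":
    # The prompt would be sent to an LLM; its CoNLL-U output then goes to
    # expert review before being scored against a gold parse.
    print(build_prompt("No sé how to fix it"))
    # Toy LAS: 3 of 4 tokens have both head and label correct -> 0.75
    print(labeled_attachment_score(
        [(0, "root"), (1, "obj"), (4, "det"), (2, "obl")],
        [(0, "root"), (1, "obj"), (4, "det"), (1, "obl")],
    ))
```

In this setup, LAS counts a token as correct only when both its head index and its dependency label match the gold annotation, which is the metric behind the reported 95.29% figure.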

@article{kellert2025_2506.07274,
  title={Parsing the Switch: LLM-Based UD Annotation for Complex Code-Switched and Low-Resource Languages},
  author={Olga Kellert and Nemika Tyagi and Muhammad Imran and Nelvin Licona-Guevara and Carlos Gómez-Rodríguez},
  journal={arXiv preprint arXiv:2506.07274},
  year={2025}
}