Conditioning LLMs to Generate Code-Switched Text

18 February 2025
Maite Heredia
Gorka Labaka
Jeremy Barnes
Aitor Soroa
Main: 8 pages · Bibliography: 3 pages · Appendix: 4 pages · 3 figures · 8 tables
Abstract

Code-switching (CS) remains a critical challenge in Natural Language Processing (NLP). Current Large Language Models (LLMs) struggle to interpret and generate code-switched text, primarily due to the scarcity of large-scale CS datasets for training. This paper presents a novel methodology to generate CS data using LLMs and tests it on the English-Spanish language pair. We propose back-translating natural CS sentences into monolingual English and using the resulting parallel corpus to fine-tune LLMs to turn monolingual sentences into CS. Unlike previous approaches to CS generation, our methodology uses natural CS data as a starting point, allowing models to learn its natural distribution beyond grammatical patterns. We thoroughly analyse the models' performance through a study on human preferences, a qualitative error analysis and an evaluation with popular automatic metrics. Results show that our methodology generates fluent code-switched text, expanding research opportunities in CS communication, and that traditional metrics do not correlate with human judgement when assessing the quality of the generated CS data. We release our code and generated dataset under a CC-BY-NC-SA license.
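To make the data-construction step concrete, the sketch below illustrates the general idea from the abstract: each natural code-switched sentence is back-translated into monolingual English, and the resulting pair is stored in the English-to-CS direction as a fine-tuning example. This is not the authors' released code; the back-translation function, prompt template and field names are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' released code) of building a parallel
# corpus for fine-tuning an LLM to turn monolingual English into code-switched
# text, following the pipeline described in the abstract.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class FinetuneExample:
    prompt: str       # monolingual English input
    completion: str   # natural code-switched target


def build_parallel_corpus(
    cs_sentences: List[str],
    back_translate: Callable[[str], str],
) -> List[FinetuneExample]:
    """Back-translate each natural CS sentence into monolingual English and
    pair it with the original CS sentence (English -> CS direction)."""
    corpus = []
    for cs in cs_sentences:
        english = back_translate(cs)  # hypothetical MT call; any English MT system
        corpus.append(
            FinetuneExample(
                # Hypothetical instruction format; the paper's actual prompt may differ.
                prompt=f"Rewrite in English-Spanish code-switching: {english}",
                completion=cs,
            )
        )
    return corpus


if __name__ == "__main__":
    # Toy example with a dummy "back-translation" for demonstration.
    toy_cs = ["I went to the tienda to buy some leche."]
    dummy_bt = lambda s: "I went to the store to buy some milk."
    for ex in build_parallel_corpus(toy_cs, dummy_bt):
        print(ex.prompt, "->", ex.completion)
```

The resulting prompt/completion pairs could then be fed to any standard supervised fine-tuning setup; the key point is that the targets are natural CS sentences rather than synthetically generated ones.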

@article{heredia2025_2502.12924,
  title={Conditioning LLMs to Generate Code-Switched Text},
  author={Maite Heredia and Gorka Labaka and Jeremy Barnes and Aitor Soroa},
  journal={arXiv preprint arXiv:2502.12924},
  year={2025}
}