Code-mixed languages, characterized by frequent within-sentence language transitions, present structural challenges that standard language models fail to address. In this work, we propose CMLFormer, an enhanced multi-layer dual-decoder Transformer with a shared encoder and synchronized decoder cross-attention, designed to model the linguistic and semantic dynamics of code-mixed text. CMLFormer is pre-trained on an augmented Hinglish corpus annotated with switching points and translations, using multiple new objectives specifically aimed at capturing switching behavior, cross-lingual structure, and code-mixing complexity. Our experiments show that CMLFormer improves F1 score, precision, and accuracy over other approaches on the HASOC-2021 benchmark under select pre-training setups. Attention analyses further show that it can identify and attend to switching points, validating its sensitivity to code-mixed structure. These results demonstrate the effectiveness of CMLFormer's architecture and multi-task pre-training strategy for modeling code-mixed languages.
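To make the shared-encoder, dual-decoder arrangement described above concrete, the following is a minimal sketch in PyTorch. It is an illustration under assumptions only: the layer counts, dimensions, the use of two standard Transformer decoders (e.g., one biased toward each language in the mix), and the simple averaging fusion of their outputs are hypothetical choices for exposition, not the paper's exact CMLFormer design or its synchronization mechanism.

```python
import torch
import torch.nn as nn


class DualDecoderTransformerSketch(nn.Module):
    """Minimal sketch: one shared encoder, two decoders that both
    cross-attend to the same encoder memory. Hyperparameters and the
    output-fusion scheme are illustrative assumptions."""

    def __init__(self, vocab_size=32000, d_model=512, nhead=8,
                 num_encoder_layers=6, num_decoder_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_encoder_layers)
        # Two decoders; both read the same shared encoder memory.
        dec_layer_a = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        dec_layer_b = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder_a = nn.TransformerDecoder(dec_layer_a, num_decoder_layers)
        self.decoder_b = nn.TransformerDecoder(dec_layer_b, num_decoder_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids):
        memory = self.encoder(self.embed(src_ids))  # shared encoder output
        tgt = self.embed(tgt_ids)
        # Causal mask so each target position only attends to earlier ones.
        T = tgt_ids.size(1)
        tgt_mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        out_a = self.decoder_a(tgt, memory, tgt_mask=tgt_mask)
        out_b = self.decoder_b(tgt, memory, tgt_mask=tgt_mask)
        fused = 0.5 * (out_a + out_b)  # assumed fusion: simple average
        return self.lm_head(fused)


if __name__ == "__main__":
    model = DualDecoderTransformerSketch()
    src = torch.randint(0, 32000, (2, 16))
    tgt = torch.randint(0, 32000, (2, 16))
    logits = model(src, tgt)
    print(logits.shape)  # torch.Size([2, 16, 32000])
```

In the actual model, the paper's switching-point and translation objectives would supply additional supervision on top of such a backbone; the sketch only shows how two decoders can share a single encoder's memory.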
@article{baral2025_2505.12587,
  title={CMLFormer: A Dual Decoder Transformer with Switching Point Learning for Code-Mixed Language Modeling},
  author={Aditeya Baral and Allen George Ajith and Roshan Nayak and Mrityunjay Abhijeet Bhanja},
  journal={arXiv preprint arXiv:2505.12587},
  year={2025}
}