
Statement-Tuning Enables Efficient Cross-lingual Generalization in Encoder-only Models

Main: 8 pages · Appendix: 9 pages · Bibliography: 6 pages · 11 figures · 8 tables
Abstract

Large Language Models (LLMs) excel in zero-shot and few-shot tasks, but achieving similar performance with encoder-only models like BERT and RoBERTa has been challenging due to their architecture. However, encoders offer advantages such as lower computational and memory costs. Recent work adapts them for zero-shot generalization using Statement Tuning, which reformulates tasks into finite templates. We extend this approach to multilingual NLP, exploring whether encoders can achieve zero-shot cross-lingual generalization and serve as efficient alternatives to memory-intensive LLMs for low-resource languages. Our results show that state-of-the-art encoder models generalize well across languages, rivaling multilingual LLMs while being more efficient. We also analyze multilingual Statement Tuning dataset design, efficiency gains, and language-specific generalization, contributing to more inclusive and resource-efficient NLP models. We release our code and models.
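
Below is a minimal sketch of how Statement-Tuning-style zero-shot inference with an encoder could look in practice. It is illustrative only: the checkpoint name (`xlm-roberta-base` as a stand-in), the statement template, and the binary true/false classification head are assumptions, not the authors' released models or templates. The key idea it demonstrates is reformulating a classification task into one natural-language statement per label and letting the encoder score each statement's plausibility.

```python
# Sketch of Statement-Tuning-style zero-shot classification with an encoder.
# Assumptions (not from the paper): the checkpoint name, the exact statement
# template, and the binary "true/false" head are placeholders for illustration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "xlm-roberta-base"  # placeholder; a real Statement-Tuned checkpoint would go here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def statement_score(statement: str) -> float:
    """Probability the encoder assigns to the statement being 'true' (class index 1)."""
    inputs = tokenizer(statement, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

def zero_shot_classify(text: str, labels: list[str]) -> str:
    """Reformulate the task as one statement per candidate label and pick
    the label whose statement the encoder scores as most likely true."""
    statements = [f'The sentiment of "{text}" is {label}.' for label in labels]
    scores = [statement_score(s) for s in statements]
    return labels[max(range(len(labels)), key=scores.__getitem__)]

print(zero_shot_classify("The movie was a delight from start to finish.", ["positive", "negative"]))
```

With an untuned base checkpoint the output is effectively random; the sketch only shows the inference pattern, in which a Statement-Tuned multilingual encoder would supply meaningful true/false scores across languages.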

@article{elshabrawy2025_2506.01592,
  title={Statement-Tuning Enables Efficient Cross-lingual Generalization in Encoder-only Models},
  author={Ahmed Elshabrawy and Thanh-Nhi Nguyen and Yeeun Kang and Lihan Feng and Annant Jain and Faadil Abdullah Shaikh and Jonibek Mansurov and Mohamed Fazli Mohamed Imam and Jesus-German Ortiz-Barajas and Rendi Chevi and Alham Fikri Aji},
  journal={arXiv preprint arXiv:2506.01592},
  year={2025}
}