The integration of large language models (LLMs) into global applications necessitates effective cultural alignment for meaningful and culturally-sensitive interactions. Current LLMs often lack the nuanced understanding required for diverse cultural contexts, and adapting them typically involves costly full fine-tuning. To address this, we introduce a novel soft prompt fine-tuning framework that enables efficient and modular cultural alignment. Our method utilizes vectorized prompt tuning to dynamically route queries to a committee of culturally specialized 'expert' LLM configurations, created by optimizing soft prompt embeddings without altering the base model's parameters. Extensive experiments demonstrate that our framework significantly enhances cultural sensitivity and adaptability, improving alignment scores from 0.208 to 0.820, offering a robust solution for culturally-aware LLM deployment. This research paves the way for subsequent investigations into enhanced cultural coverage and dynamic expert adaptation, crucial for realizing autonomous AI with deeply nuanced understanding in a globally interconnected world.
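To make the mechanism concrete, below is a minimal sketch of per-culture soft prompt tuning with query routing, under assumptions not spelled out in the abstract: the class and function names (CulturalSoftPrompt, route_to_expert), the cosine-similarity router over per-culture prototype vectors, and all dimensions are illustrative stand-ins, not the paper's actual implementation.

```python
# Sketch: each cultural 'expert' is a small set of learnable prompt embeddings
# prepended to the input; the base model's parameters remain frozen.
# Router and names are hypothetical illustrations of the described architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CulturalSoftPrompt(nn.Module):
    """Learnable soft prompt for one cultural expert; only these weights are trained."""
    def __init__(self, n_prompt_tokens: int, d_model: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, d_model) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # Prepend the soft prompt to every sequence in the batch.
        batch = token_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeds], dim=1)

def route_to_expert(query_vec: torch.Tensor, prototypes: dict) -> str:
    """Pick the cultural expert whose prototype vector is most similar to the query."""
    scores = {c: F.cosine_similarity(query_vec, p, dim=0).item()
              for c, p in prototypes.items()}
    return max(scores, key=scores.get)

# Usage: only the soft prompt parameters are optimized; the frozen LLM consumes
# the augmented embeddings exactly as it would ordinary token embeddings.
d_model = 768
experts = {c: CulturalSoftPrompt(n_prompt_tokens=20, d_model=d_model)
           for c in ["culture_a", "culture_b", "culture_c"]}
prototypes = {c: torch.randn(d_model) for c in experts}   # per-culture query prototypes (illustrative)
query_vec = torch.randn(d_model)                          # embedding of the incoming query
chosen = route_to_expert(query_vec, prototypes)
token_embeds = torch.randn(2, 16, d_model)                # stand-in for the frozen embedding layer's output
augmented = experts[chosen](token_embeds)                 # shape (2, 36, d_model), passed to the frozen LLM
```

Because each expert is just a small embedding matrix, experts can be added, swapped, or retrained independently of the base model, which is what makes the alignment modular.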
@article{feng2025_2506.00242,
  title={Whispers of Many Shores: Cultural Alignment through Collaborative Cultural Expertise},
  author={Shuai Feng and Wei-Chuang Chan and Srishti Chouhan and Junior Francisco Garcia Ayala and Srujananjali Medicherla and Kyle Clark and Mingwei Shi},
  journal={arXiv preprint arXiv:2506.00242},
  year={2025}
}