MKA: Leveraging Cross-Lingual Consensus for Model Abstention

31 March 2025
Sharad Duwal
Abstract

Reliability of LLMs remains questionable even as they get better at more tasks. Wider adoption of LLMs is contingent on whether they are usably factual and, when they are not, on whether they can properly calibrate the confidence in their responses. This work focuses on utilizing the multilingual knowledge of an LLM to inform its decision to abstain or answer when prompted. We develop a multilingual pipeline to calibrate the model's confidence and let it abstain when uncertain. We run several multilingual models through the pipeline to profile them across different languages. We find that the performance of the pipeline varies by model and language, but that in general models benefit from it: accuracy improves by 71.2% for Bengali over a baseline without the pipeline, and even a high-resource language like English sees a 15.5% improvement. These results hint at possible further improvements.
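The abstract only sketches the pipeline at a high level. One minimal way to picture cross-lingual consensus abstention — purely an illustrative toy, not the paper's actual method; the function name, threshold, and majority-vote rule are all assumptions — is to ask the same question in several languages, normalize the answers to a common language, and answer only when enough of them agree:

```python
from collections import Counter

def consensus_abstain(answers, threshold=0.5):
    """Return the majority answer if cross-lingual agreement meets the
    threshold, otherwise abstain (None).

    `answers` holds the model's responses to the same question posed in
    different languages, already normalized to one language. This is a
    toy stand-in for a confidence-calibration pipeline, not the paper's
    implementation.
    """
    if not answers:
        return None
    top, count = Counter(answers).most_common(1)[0]
    agreement = count / len(answers)
    return top if agreement >= threshold else None

# Strong agreement across languages -> answer is returned.
print(consensus_abstain(["Paris", "Paris", "Paris", "Lyon"]))  # Paris

# Answers disagree -> the model abstains.
print(consensus_abstain(["Paris", "Lyon", "Kathmandu", "Rome"]))  # None
```

The design choice here is that disagreement across languages is treated as a proxy for low confidence; the paper's actual calibration may weight languages or use a different aggregation rule.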

View on arXiv
@article{duwal2025_2503.23687,
  title={MKA: Leveraging Cross-Lingual Consensus for Model Abstention},
  author={Sharad Duwal},
  journal={arXiv preprint arXiv:2503.23687},
  year={2025}
}