ResearchTrend.AI

arXiv:2411.15640
AfriMed-QA: A Pan-African, Multi-Specialty, Medical Question-Answering Benchmark Dataset

23 November 2024
Tobi Olatunji
Charles Nimo
A. Owodunni
Tassallah Abdullahi
Emmanuel Ayodele
Mardhiyah Sanni
Chinemelu Aka
Folafunmi Omofoye
Foutse Yuehgoh
Timothy Faniran
Bonaventure F. P. Dossou
Moshood Yekini
Jonas Kemp
Katherine Heller
Jude Chidubem Omeke
Chidi Asuzu MD
Naome A. Etori
Aimérou Ndiaye
Ifeoma Okoh
Evans Doe Ocansey
Wendy Kinara
Michael Best
Irfan Essa
Stephen E. Moore
Chris Fourie
M. Asiedu
Abstract

Recent advancements in large language model (LLM) performance on medical multiple-choice question (MCQ) benchmarks have stimulated interest from healthcare providers and patients globally. Particularly in low- and middle-income countries (LMICs) facing acute physician shortages and a lack of specialists, LLMs offer a potentially scalable pathway to enhance healthcare access and reduce costs. However, their effectiveness in the Global South, especially across the African continent, remains to be established. In this work, we introduce AfriMed-QA, the first large-scale Pan-African English multi-specialty medical question-answering (QA) dataset: 15,000 questions (open- and closed-ended) sourced from over 60 medical schools across 16 countries, covering 32 medical specialties. We further evaluate 30 LLMs across multiple axes, including correctness and demographic bias. Our findings show significant performance variation across specialties and geographies, and MCQ performance clearly lags that on USMLE (MedQA). We find that biomedical LLMs underperform general models, and that smaller edge-friendly LLMs struggle to achieve a passing score. Interestingly, human evaluations show a consistent consumer preference for LLM answers and explanations when compared with clinician answers.
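The abstract reports MCQ correctness broken down by specialty and geography. As a minimal sketch of what such an aggregation looks like, the snippet below scores MCQ predictions and computes accuracy per specialty; the record field names (`specialty`, `answer`, `prediction`) are illustrative assumptions, not the dataset's actual schema.

```python
from collections import defaultdict

def per_specialty_accuracy(records):
    """Return {specialty: fraction of correct MCQ predictions}.

    Each record is a dict with hypothetical keys:
    'specialty', 'answer' (gold option), 'prediction' (model option).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["specialty"]] += 1
        if r["prediction"] == r["answer"]:
            correct[r["specialty"]] += 1
    return {s: correct[s] / total[s] for s in total}

# Toy records standing in for model outputs on the benchmark.
records = [
    {"specialty": "Cardiology", "answer": "B", "prediction": "B"},
    {"specialty": "Cardiology", "answer": "A", "prediction": "C"},
    {"specialty": "Pediatrics", "answer": "D", "prediction": "D"},
]
print(per_specialty_accuracy(records))
# {'Cardiology': 0.5, 'Pediatrics': 1.0}
```

The same grouping applied to a country field rather than a specialty field would yield the geographic breakdown the abstract mentions.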
