Aligning LLMs to Ask Good Questions: A Case Study in Clinical Reasoning

21 February 2025
Shuyue Stella Li
Jimin Mun
Faeze Brahman
Jonathan Ilgen
Yulia Tsvetkov
Maarten Sap
Abstract

Large language models (LLMs) often fail to ask effective questions under uncertainty, making them unreliable in domains where proactive information-gathering is essential for decision-making. We present ALFA, a framework that improves LLM question-asking by (i) decomposing the notion of a "good" question into a set of theory-grounded attributes (e.g., clarity, relevance), (ii) controllably synthesizing attribute-specific question variations, and (iii) aligning models via preference-based optimization to explicitly learn to ask better questions along these fine-grained attributes. Focusing on clinical reasoning as a case study, we introduce the MediQ-AskDocs dataset, composed of 17k real-world clinical interactions augmented with 80k attribute-specific preference pairs of follow-up questions, as well as a novel expert-annotated interactive healthcare QA task to evaluate question-asking abilities. Models aligned with ALFA reduce diagnostic errors by 56.6% on MediQ-AskDocs compared to SOTA instruction-tuned LLMs, with a question-level win-rate of 64.4% and strong generalizability. Our findings suggest that explicitly guiding question-asking with structured, fine-grained attributes offers a scalable path to improve LLMs, especially in expert application domains.
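The page carries no code, so as a rough illustration only: the "preference-based optimization" step the abstract describes could be instantiated as a DPO-style objective over one attribute-specific pair, where the preferred question improves a single attribute (say, clarity) over the rejected one. The function name, the beta value, and the choice of DPO as the optimizer are assumptions here, not details taken from the paper.

import torch
import torch.nn.functional as F

def preference_loss(policy_logp_good, policy_logp_bad,
                    ref_logp_good, ref_logp_bad, beta=0.1):
    """DPO-style loss over one attribute-specific preference pair.

    Each argument is the summed token log-probability of a candidate
    follow-up question under the trainable policy or the frozen
    reference model. (Hypothetical sketch; not the authors' code.)
    """
    policy_margin = policy_logp_good - policy_logp_bad
    ref_margin = ref_logp_good - ref_logp_bad
    # Push the policy to prefer the attribute-improved question
    # more strongly than the reference model already does.
    return -F.logsigmoid(beta * (policy_margin - ref_margin)).mean()

# Toy usage with scalar log-probs; in practice these come from
# scoring each question with the policy and reference LLMs.
loss = preference_loss(torch.tensor(-12.0), torch.tensor(-15.0),
                       torch.tensor(-13.0), torch.tensor(-14.0))

One pair like this would exist per attribute, which is what lets the alignment signal stay fine-grained rather than collapsing all notions of question quality into a single scalar preference.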

@article{li2025_2502.14860,
  title={Aligning LLMs to Ask Good Questions: A Case Study in Clinical Reasoning},
  author={Shuyue Stella Li and Jimin Mun and Faeze Brahman and Jonathan S. Ilgen and Yulia Tsvetkov and Maarten Sap},
  journal={arXiv preprint arXiv:2502.14860},
  year={2025}
}