
QA-prompting: Improving Summarization with Large Language Models using Question-Answering

Main: 8 pages, 4 figures, 9 tables; Bibliography: 3 pages; Appendix: 3 pages
Abstract

Language Models (LMs) have revolutionized natural language processing, enabling high-quality text generation through prompting and in-context learning. However, models often struggle with long-context summarization due to positional biases, leading to suboptimal extraction of critical information. Existing remedies rely on fine-tuning, pipelining, or other complex techniques, each of which introduces its own challenges. To address these challenges, we propose QA-prompting, a simple prompting method for summarization that uses question-answering as an intermediate step before summary generation. Our method extracts key information and enriches the context of the text to mitigate positional biases and improve summarization in a single LM call per task, without requiring fine-tuning or pipelining. Experiments on multiple datasets from different domains using ten state-of-the-art pre-trained models demonstrate that QA-prompting outperforms baseline and other state-of-the-art methods, achieving up to 29% improvement in ROUGE scores. This provides an effective and scalable solution for summarization and highlights the importance of domain-specific question selection for optimal performance.
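To make the single-call idea concrete, below is a minimal sketch of how such a prompt might be composed. The question list, prompt wording, and function name are illustrative assumptions, not taken from the paper; as the abstract notes, the questions should be chosen per domain.

# Minimal sketch of QA-prompting: compose one prompt that asks the model to
# answer key questions first, then write a summary grounded in those answers,
# so only a single LM call is needed per task. Questions below are
# hypothetical examples; the paper emphasizes domain-specific selection.

QUESTIONS = [
    "Who are the main entities involved?",
    "What is the central event or finding?",
    "When and where does it take place?",
    "Why is it significant?",
]

def build_qa_prompt(document: str) -> str:
    """Build a single prompt: intermediate QA step, then summary generation."""
    question_block = "\n".join(f"- {q}" for q in QUESTIONS)
    return (
        "Read the following document.\n\n"
        f"{document}\n\n"
        "First, answer these questions based on the document:\n"
        f"{question_block}\n\n"
        "Then, using your answers, write a concise summary of the document."
    )

# Usage: send the returned string to any instruction-tuned LM in one call.
# prompt = build_qa_prompt(article_text)

The QA step forces the model to retrieve salient facts from across the context before summarizing, which is how the method mitigates positional biases without fine-tuning or multi-stage pipelines.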

@article{sinha2025_2505.14347,
  title={QA-prompting: Improving Summarization with Large Language Models using Question-Answering},
  author={Neelabh Sinha},
  journal={arXiv preprint arXiv:2505.14347},
  year={2025}
}