QQSUM: A Novel Task and Model of Quantitative Query-Focused Summarization for Review-based Product Question Answering

Review-based Product Question Answering (PQA) allows e-commerce platforms to automatically address customer queries by leveraging insights from user reviews. However, existing PQA systems generate answers from only a single perspective, failing to capture the diversity of customer opinions. In this paper, we introduce a novel task, Quantitative Query-Focused Summarization (QQSUM), which aims to summarize diverse customer opinions into representative Key Points (KPs) and quantify their prevalence to effectively answer user queries. While Retrieval-Augmented Generation (RAG) shows promise for PQA, its generated answers still fall short of capturing the full diversity of viewpoints. To tackle this challenge, our model QQSUM-RAG, which extends RAG, employs few-shot learning to jointly train a KP-oriented retriever and a KP summary generator, enabling KP-based summaries that capture diverse and representative opinions. Experimental results demonstrate that QQSUM-RAG achieves superior performance compared to state-of-the-art RAG baselines in both textual quality and quantification accuracy of opinions. Our source code is available at: this https URL
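To make the described pipeline concrete, below is a minimal, hypothetical sketch of a QQSUM-style inference flow: retrieve query-relevant review sentences, group them into opinion clusters, generate one KP per cluster, and report cluster size as prevalence. This is not the authors' implementation; the functions `embed` and `summarize_cluster` are placeholders for the paper's trained KP-oriented retriever and KP summary generator, and the clustering choice is an assumption for illustration only.

```python
# Hypothetical sketch of a QQSUM-style pipeline (not the authors' code).
# `embed` and `summarize_cluster` stand in for the KP-oriented retriever's
# encoder and the KP summary generator described in the abstract.

from dataclasses import dataclass
from typing import List

import numpy as np
from sklearn.cluster import AgglomerativeClustering


@dataclass
class KeyPoint:
    summary: str          # short key point (KP) statement
    prevalence: int       # number of retrieved review sentences supporting the KP
    support: List[str]    # the supporting review sentences themselves


def embed(texts: List[str]) -> np.ndarray:
    """Placeholder embedding; a real system would call the trained retriever's encoder."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 32))


def summarize_cluster(sentences: List[str]) -> str:
    """Placeholder for the KP summary generator (e.g., a few-shot prompted LLM)."""
    return sentences[0]  # naive stand-in: use the first sentence as the KP


def qqsum_answer(query: str, reviews: List[str], top_k: int = 20,
                 distance_threshold: float = 8.0) -> List[KeyPoint]:
    # 1) KP-oriented retrieval: rank review sentences by similarity to the query.
    review_vecs = embed(reviews)
    query_vec = embed([query])[0]
    scores = review_vecs @ query_vec
    retrieved = [reviews[i] for i in np.argsort(-scores)[:top_k]]

    # 2) Group retrieved sentences into opinion clusters (candidate KPs).
    clustering = AgglomerativeClustering(
        n_clusters=None, distance_threshold=distance_threshold)
    labels = clustering.fit_predict(embed(retrieved))

    # 3) Generate one KP per cluster and quantify prevalence by cluster size.
    key_points = []
    for label in set(labels):
        members = [s for s, l in zip(retrieved, labels) if l == label]
        key_points.append(KeyPoint(summary=summarize_cluster(members),
                                   prevalence=len(members),
                                   support=members))
    return sorted(key_points, key=lambda kp: kp.prevalence, reverse=True)
```

The returned list pairs each KP with its prevalence count, mirroring the task definition of summarizing diverse opinions and quantifying how common each one is; the actual model learns the retriever and generator jointly rather than using the fixed placeholders shown here.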
@article{tang2025_2506.04020,
  title   = {QQSUM: A Novel Task and Model of Quantitative Query-Focused Summarization for Review-based Product Question Answering},
  author  = {An Quang Tang and Xiuzhen Zhang and Minh Ngoc Dinh and Zhuang Li},
  journal = {arXiv preprint arXiv:2506.04020},
  year    = {2025}
}