MA4DIV: Multi-Agent Reinforcement Learning for Search Result Diversification

26 March 2024
Yiqun Chen, Jiaxin Mao, Yi Zhang, Dehong Ma, Long Xia, Jun Fan, Daiting Shi, Zhicong Cheng, Simiu Gu, Dawei Yin
Abstract

The objective of search result diversification (SRD) is to ensure that the selected documents cover as many different subtopics as possible. Existing methods primarily follow a "greedy selection" paradigm, i.e., selecting the document with the highest diversity score one at a time. These approaches tend to be inefficient and are easily trapped in suboptimal states. Other methods aim to approximately optimize a diversity metric such as α-NDCG, but their results still remain suboptimal. To address these challenges, we introduce Multi-Agent reinforcement learning (MARL) for search result DIVersity, which we call MA4DIV. In this approach, each document is an agent, and search result diversification is modeled as a cooperative task among multiple agents. This formulation allows diversity metrics such as α-NDCG to be optimized directly while achieving high training efficiency. We conducted preliminary experiments on public TREC datasets to demonstrate the effectiveness and potential of MA4DIV. Considering the limited number of queries in the public TREC datasets, we also construct a large-scale dataset from industry sources and show that MA4DIV achieves substantial improvements in both effectiveness and efficiency over existing baselines on this industrial-scale dataset.
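
The central quantity in the abstract is α-NDCG, the diversity metric that MA4DIV optimizes directly. Below is a minimal Python sketch of α-NDCG@k under its standard definition: a document's gain on a subtopic is discounted by (1 - α) for every higher-ranked document that already covers that subtopic, and the score is normalized by a greedily built ideal ranking. The shared-reward comment at the end is only an illustrative assumption about the cooperative multi-agent setup, not the paper's actual implementation.

import math
import numpy as np

def alpha_dcg(ranking, subtopic_labels, alpha=0.5, k=10):
    # alpha-DCG@k: each document's gain on a subtopic is discounted by
    # (1 - alpha) for every higher-ranked document already covering it.
    covered = np.zeros(subtopic_labels.shape[1])
    score = 0.0
    for rank, doc in enumerate(ranking[:k]):
        gains = subtopic_labels[doc] * (1 - alpha) ** covered
        score += gains.sum() / math.log2(rank + 2)
        covered += subtopic_labels[doc]
    return score

def alpha_ndcg(ranking, subtopic_labels, alpha=0.5, k=10):
    # alpha-NDCG@k: normalize by the alpha-DCG of a greedily built ideal ranking.
    remaining = list(range(subtopic_labels.shape[0]))
    covered = np.zeros(subtopic_labels.shape[1])
    ideal = []
    for _ in range(min(k, len(remaining))):
        gains = [(subtopic_labels[d] * (1 - alpha) ** covered).sum() for d in remaining]
        best = remaining.pop(int(np.argmax(gains)))
        ideal.append(best)
        covered += subtopic_labels[best]
    ideal_score = alpha_dcg(ideal, subtopic_labels, alpha, k)
    return alpha_dcg(ranking, subtopic_labels, alpha, k) / ideal_score if ideal_score > 0 else 0.0

# Illustrative use as a shared reward (an assumption, not the paper's code):
# each agent scores its own document, documents are ranked by those scores,
# and every agent receives the same reward alpha_ndcg(ranking, labels).
labels = np.array([[1, 0], [1, 0], [0, 1]])            # document x subtopic relevance
print(alpha_ndcg([0, 1, 2], labels, alpha=0.5, k=3))   # the diverse ordering [0, 2, 1] would score higher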

View on arXiv
@article{chen2025_2403.17421,
  title={MA4DIV: Multi-Agent Reinforcement Learning for Search Result Diversification},
  author={Yiqun Chen and Jiaxin Mao and Yi Zhang and Dehong Ma and Long Xia and Jun Fan and Daiting Shi and Zhicong Cheng and Simiu Gu and Dawei Yin},
  journal={arXiv preprint arXiv:2403.17421},
  year={2025}
}