DisastIR: A Comprehensive Information Retrieval Benchmark for Disaster Management

20 May 2025
Kai Yin
Xiangjue Dong
Chengkai Liu
Lipai Huang
Yiming Xiao
Zhewei Liu
Ali Mostafavi
James Caverlee
Abstract

Effective disaster management requires timely access to accurate and contextually relevant information. Existing Information Retrieval (IR) benchmarks, however, focus primarily on general or specialized domains, such as medicine or finance, neglecting the unique linguistic complexity and diverse information needs encountered in disaster management scenarios. To bridge this gap, we introduce DisastIR, the first comprehensive IR evaluation benchmark specifically tailored for disaster management. DisastIR comprises 9,600 diverse user queries and more than 1.3 million labeled query-passage pairs, covering 48 distinct retrieval tasks derived from six search intents and eight general disaster categories that include 301 specific event types. Our evaluations of 30 state-of-the-art retrieval models reveal substantial performance variation across tasks, with no single model excelling universally. Furthermore, comparative analyses reveal significant performance gaps between general-domain and disaster management-specific tasks, highlighting the necessity of disaster management-specific benchmarks for guiding IR model selection to support effective decision-making in disaster management scenarios. All source code and DisastIR are available at this https URL.
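To make the evaluation setup concrete, the sketch below shows one plausible way to score a dense retriever on a single DisastIR-style task with graded query-passage relevance labels and nDCG@10. It is a minimal illustration under stated assumptions: the model name, the `id`/`text` field layout, and the `qrels` dictionary format are hypothetical and not taken from the paper, which should be consulted for its actual evaluation pipeline and metrics.

```python
# Minimal sketch: scoring a dense retriever on one retrieval task with nDCG@10.
# The model name, data field names, and qrels layout below are illustrative
# assumptions, not the paper's actual pipeline.
import numpy as np
from sentence_transformers import SentenceTransformer

def ndcg_at_k(ranked_gains, k=10):
    """nDCG@k given graded relevance gains listed in ranked order."""
    gains = np.asarray(ranked_gains[:k], dtype=float)
    discounts = 1.0 / np.log2(np.arange(2, gains.size + 2))
    dcg = float((gains * discounts).sum())
    ideal = np.sort(np.asarray(ranked_gains, dtype=float))[::-1][:k]
    idcg = float((ideal * (1.0 / np.log2(np.arange(2, ideal.size + 2)))).sum())
    return dcg / idcg if idcg > 0 else 0.0

def evaluate_task(model_name, queries, passages, qrels, k=10):
    """queries/passages: lists of {'id': ..., 'text': ...};
    qrels: dict mapping query id -> {passage id: graded relevance}."""
    model = SentenceTransformer(model_name)
    q_emb = model.encode([q["text"] for q in queries], normalize_embeddings=True)
    p_emb = model.encode([p["text"] for p in passages], normalize_embeddings=True)
    scores = q_emb @ p_emb.T  # cosine similarity via normalized dot products
    ndcgs = []
    for i, q in enumerate(queries):
        ranking = np.argsort(-scores[i])  # passage indices, best score first
        labels = qrels.get(q["id"], {})
        gains = [labels.get(passages[j]["id"], 0) for j in ranking]
        ndcgs.append(ndcg_at_k(gains, k))
    return float(np.mean(ndcgs))

# Example usage (hypothetical model and toy data):
# score = evaluate_task("sentence-transformers/all-MiniLM-L6-v2",
#                       queries, passages, qrels, k=10)
```

Averaging such per-task scores across the 48 tasks (six search intents crossed with eight disaster categories) is one way the benchmark can expose the task-level variation the abstract reports.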

@article{yin2025_2505.15856,
  title={DisastIR: A Comprehensive Information Retrieval Benchmark for Disaster Management},
  author={Kai Yin and Xiangjue Dong and Chengkai Liu and Lipai Huang and Yiming Xiao and Zhewei Liu and Ali Mostafavi and James Caverlee},
  journal={arXiv preprint arXiv:2505.15856},
  year={2025}
}