VisText-Mosquito: A Multimodal Dataset and Benchmark for AI-Based Mosquito Breeding Site Detection and Reasoning

17 June 2025
Md. Adnanul Islam
Md. Faiyaz Abdullah Sayeedi
Md. Asaduzzaman Shuvo
Muhammad Ziaur Rahman
Shahanur Rahman Bappy
Raiyan Rahman
Swakkhar Shatabda
Main: 4 pages · 2 figures · 1 table · Bibliography: 1 page
Abstract

Mosquito-borne diseases pose a major global health risk, requiring early detection and proactive control of breeding sites to prevent outbreaks. In this paper, we present VisText-Mosquito, a multimodal dataset that integrates visual and textual data to support automated detection, segmentation, and reasoning for mosquito breeding site analysis. The dataset includes 1,828 annotated images for object detection, 142 images for water surface segmentation, and natural language reasoning texts linked to each image. The YOLOv9s model achieves the highest precision of 0.92926 and mAP@50 of 0.92891 for object detection, while YOLOv11n-Seg reaches a segmentation precision of 0.91587 and mAP@50 of 0.79795. For reasoning generation, our fine-tuned BLIP model achieves a final loss of 0.0028, with a BLEU score of 54.7, BERTScore of 0.91, and ROUGE-L of 0.87. This dataset and model framework emphasize the theme "Prevention is Better than Cure", showcasing how AI-based detection can proactively address mosquito-borne disease risks. The dataset and implementation code are publicly available on GitHub: this https URL
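
As an illustration of how the pipeline described in the abstract could be exercised, the following Python sketch runs breeding-site detection with an Ultralytics YOLO model and then generates a natural-language reasoning text for the same image with a BLIP model from Hugging Face Transformers. The weight filename, the BLIP checkpoint name, and the image path are placeholder assumptions, not artifacts taken from the paper; the actual fine-tuned models are distributed via the authors' GitHub repository.

from ultralytics import YOLO
from transformers import BlipProcessor, BlipForConditionalGeneration
from PIL import Image

# Placeholder paths and checkpoint names (assumptions, not the authors' released files).
DETECTOR_WEIGHTS = "yolov9s-mosquito.pt"                   # assumed fine-tuned YOLOv9s detector
BLIP_CHECKPOINT = "Salesforce/blip-image-captioning-base"  # base BLIP; the paper fine-tunes its own variant
IMAGE_PATH = "example_breeding_site.jpg"

# 1) Detect candidate breeding-site objects in the image.
detector = YOLO(DETECTOR_WEIGHTS)
detections = detector(IMAGE_PATH)[0]
for box in detections.boxes:
    cls_name = detections.names[int(box.cls)]
    conf = float(box.conf)
    print(f"{cls_name}: confidence {conf:.2f}, box {box.xyxy.tolist()}")

# 2) Generate a reasoning caption for the same image with BLIP.
processor = BlipProcessor.from_pretrained(BLIP_CHECKPOINT)
blip = BlipForConditionalGeneration.from_pretrained(BLIP_CHECKPOINT)

image = Image.open(IMAGE_PATH).convert("RGB")
inputs = processor(images=image, return_tensors="pt")
output_ids = blip.generate(**inputs, max_new_tokens=60)
print(processor.decode(output_ids[0], skip_special_tokens=True))

Water surface segmentation with the YOLOv11n-Seg model would follow the same Ultralytics pattern, swapping in the segmentation weights and reading the results' masks attribute instead of boxes.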

@article{islam2025_2506.14629,
  title={VisText-Mosquito: A Multimodal Dataset and Benchmark for AI-Based Mosquito Breeding Site Detection and Reasoning},
  author={Md. Adnanul Islam and Md. Faiyaz Abdullah Sayeedi and Md. Asaduzzaman Shuvo and Muhammad Ziaur Rahman and Shahanur Rahman Bappy and Raiyan Rahman and Swakkhar Shatabda},
  journal={arXiv preprint arXiv:2506.14629},
  year={2025}
}