A Parallel Down-Up Fusion Network for Salient Object Detection in Optical Remote Sensing Images

2 October 2020
Chongyi Li
Runmin Cong
Chunle Guo
Hua Li
Chunjie Zhang
Feng Zheng
Yao Zhao
Abstract

The diverse spatial resolutions, varied object types, scales, and orientations, and cluttered backgrounds of optical remote sensing images (RSIs) challenge current salient object detection (SOD) approaches. Directly applying SOD approaches designed for natural scene images (NSIs) to RSIs is usually unsatisfactory. In this paper, we propose a novel Parallel Down-Up Fusion Network (PDF-Net) for SOD in optical RSIs, which takes full advantage of in-path low- and high-level features and cross-path multi-resolution features to distinguish salient objects of diverse scales and to suppress cluttered backgrounds. Specifically, guided by the key observation that salient objects remain salient regardless of image resolution, the PDF-Net applies successive down-sampling to form five parallel paths and perceive salient objects at the diverse scales common in optical RSIs. Meanwhile, we adopt dense connections to exploit both low- and high-level information within each path and to relate features across paths, which explicitly yields strong feature representations. Finally, we fuse the multi-resolution features from the parallel paths to combine their complementary benefits: high-resolution features preserve complete structures and clear details, while low-resolution features highlight salient objects at different scales. Extensive experiments on the ORSSD dataset demonstrate that the proposed network is superior to state-of-the-art approaches both qualitatively and quantitatively.
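For intuition only, the sketch below illustrates the parallel multi-resolution idea the abstract describes: process the image along several paths at successively halved resolutions, then up-sample and fuse the features into a single saliency map. All class names, channel widths, and the simple concatenate-plus-1x1-conv fusion here are illustrative assumptions; the paper's actual PDF-Net additionally uses dense in-path and cross-path connections and its own fusion design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    """Two 3x3 conv layers with BN and ReLU (placeholder feature extractor)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.body(x)

class ParallelDownUpSketch(nn.Module):
    """Toy sketch of the parallel down-up scheme, NOT the paper's exact model:
    path i sees the image at 1/2^i resolution, and all paths are fused at
    full resolution into one saliency map."""
    def __init__(self, num_paths=5, ch=32):
        super().__init__()
        self.paths = nn.ModuleList(ConvBlock(3, ch) for _ in range(num_paths))
        self.fuse = nn.Conv2d(ch * num_paths, 1, 1)  # 1x1 conv -> saliency logits

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = []
        for i, path in enumerate(self.paths):
            # Successive down-sampling forms the parallel paths.
            xi = x if i == 0 else F.interpolate(
                x, scale_factor=1 / 2 ** i, mode='bilinear', align_corners=False)
            fi = path(xi)
            # Up-sample each path's features back to full resolution before fusion.
            feats.append(F.interpolate(fi, size=(h, w), mode='bilinear',
                                       align_corners=False))
        return torch.sigmoid(self.fuse(torch.cat(feats, dim=1)))

# Usage: a 256x256 RGB image -> a full-resolution saliency map in [0, 1].
model = ParallelDownUpSketch()
saliency = model(torch.randn(1, 3, 256, 256))
print(saliency.shape)  # torch.Size([1, 1, 256, 256])
```

The design point the sketch captures is the one the abstract argues for: the high-resolution path preserves structure and detail, while the coarser paths respond to salient objects at larger scales, and the fusion step combines both.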
