arXiv:2103.04354
Feedback Refined Local-Global Network for Super-Resolution of Hyperspectral Imagery

7 March 2021
Zhenjie Tang, Qingyu Xu, Zhenwei Shi, Bin Pan
Abstract

With the development of deep learning, super-resolution methods based on convolutional neural networks have recently made great progress on multi-spectral images. However, single hyperspectral image super-resolution remains a challenging problem due to the high-dimensional and complex spectral characteristics of hyperspectral data, which make it difficult to capture spatial and spectral information simultaneously. To address this issue, we propose a novel Feedback Refined Local-Global Network (FRLGN) for the super-resolution of hyperspectral images. Specifically, we develop a new Feedback Structure and a Local-Global Spectral Block to alleviate the difficulty of spatial and spectral feature extraction. The Feedback Structure transfers high-level information back to guide the generation of low-level features, and is realized as a recurrent structure with a finite number of unfoldings. Furthermore, to use the fed-back high-level information effectively, a Local-Global Spectral Block is constructed to handle the feedback connections. The Local-Global Spectral Block uses the fed-back high-level information to correct low-level features within local spectral bands and generates powerful high-level representations across global spectral bands. By incorporating the Feedback Structure and the Local-Global Spectral Block, FRLGN can fully exploit spatial-spectral correlations among spectral bands and gradually reconstruct high-resolution hyperspectral images. The source code of FRLGN is available at https://github.com/tangzhenjie/FRLGN.
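
The feedback mechanism the abstract describes can be expressed compactly in code. Below is a minimal PyTorch sketch of the general idea: a block with shared weights is unfolded a fixed number of times, and the high-level state produced at one step is fed back to refine the low-level features at the next, with each unfolding emitting a super-resolution estimate. All module names, layer choices, and sizes here are illustrative assumptions for exposition, not the authors' implementation; see the linked repository for the actual FRLGN code.

```python
# Hypothetical sketch of a feedback super-resolution network with T unfoldings.
# The FeedbackBlock below is a simple residual stand-in for the paper's
# Local-Global Spectral Block, which is more elaborate in the actual work.

import torch
import torch.nn as nn


class FeedbackBlock(nn.Module):
    """Refines low-level features using fed-back high-level information."""

    def __init__(self, channels: int):
        super().__init__()
        # Fuse the current low-level feature with the fed-back high-level state.
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, low: torch.Tensor, feedback: torch.Tensor) -> torch.Tensor:
        x = self.fuse(torch.cat([low, feedback], dim=1))
        return self.body(x) + x  # residual refinement


class FeedbackSR(nn.Module):
    """Recurrent structure with a finite number of unfoldings (steps)."""

    def __init__(self, bands: int = 31, channels: int = 64,
                 scale: int = 4, steps: int = 4):
        super().__init__()
        self.steps = steps
        self.head = nn.Conv2d(bands, channels, kernel_size=3, padding=1)
        self.block = FeedbackBlock(channels)  # weights shared across all steps
        self.tail = nn.Sequential(
            nn.Conv2d(channels, bands * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )
        self.up = nn.Upsample(scale_factor=scale, mode="bicubic",
                              align_corners=False)

    def forward(self, lr: torch.Tensor) -> list:
        low = self.head(lr)
        state = torch.zeros_like(low)  # no feedback at the first unfolding
        outputs = []
        for _ in range(self.steps):
            # High-level state from this step is fed back at the next step.
            state = self.block(low, state)
            # Each unfolding emits an SR estimate on top of a bicubic upsample,
            # so intermediate outputs can be supervised during training
            # (a common choice for feedback-style SR networks).
            outputs.append(self.up(lr) + self.tail(state))
        return outputs


# Usage: a 31-band hyperspectral patch upscaled 4x, refined over 4 unfoldings.
if __name__ == "__main__":
    net = FeedbackSR(bands=31, channels=64, scale=4, steps=4)
    sr_estimates = net(torch.randn(1, 31, 16, 16))
    print([o.shape for o in sr_estimates])  # 4 tensors of shape (1, 31, 64, 64)
```

The key design point is weight sharing: reusing one block across unfoldings keeps the parameter count fixed while the repeated feedback passes progressively correct the low-level features, which is what distinguishes a feedback network from simply stacking deeper layers.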
