Improving Brain-to-Image Reconstruction via Fine-Grained Text Bridging

28 May 2025
Runze Xia
Shuo Feng
Renzhi Wang
Congchi Yin
Xuyun Wen
Piji Li
Abstract

Brain-to-Image reconstruction aims to recover the visual stimuli perceived by humans from brain activity. However, the reconstructed images often lack fine details and exhibit semantic inconsistencies, which may be attributed to insufficient semantic information. To address this issue, we propose Fine-grained Brain-to-Image reconstruction (FgB2I), an approach that employs fine-grained text as a bridge to improve image reconstruction. FgB2I comprises three key stages: detail enhancement, decoding of fine-grained text descriptions, and text-bridged brain-to-image reconstruction. In the detail-enhancement stage, we leverage large vision-language models to generate fine-grained captions for visual stimuli and experimentally validate their importance. We propose three reward metrics (object accuracy, text-image semantic similarity, and image-image semantic similarity) to guide the language model in decoding fine-grained text descriptions from fMRI signals. The decoded fine-grained text descriptions can be integrated into existing reconstruction methods to achieve fine-grained brain-to-image reconstruction.
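As a rough illustration of how the three reward metrics named in the abstract might be combined to score a decoded caption, here is a minimal Python sketch. The function names, the equal default weighting, and the choice of cosine similarity over precomputed embeddings are assumptions for illustration only; the abstract does not specify the embedding model or how the rewards are weighted.

import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def combined_reward(
    caption_objects: set[str],        # objects mentioned in the decoded caption
    reference_objects: set[str],      # objects present in the ground-truth stimulus
    caption_embedding: np.ndarray,    # embedding of the decoded caption
    stimulus_embedding: np.ndarray,   # embedding of the ground-truth stimulus image
    recon_embedding: np.ndarray,      # embedding of an image generated from the caption
    weights: tuple[float, float, float] = (1.0, 1.0, 1.0),  # assumed equal weights
) -> float:
    # Object accuracy: fraction of reference objects the caption mentions.
    obj_acc = (
        len(caption_objects & reference_objects) / len(reference_objects)
        if reference_objects else 0.0
    )
    # Text-image semantic similarity: caption vs. stimulus image.
    ti_sim = cosine(caption_embedding, stimulus_embedding)
    # Image-image semantic similarity: reconstruction vs. stimulus image.
    ii_sim = cosine(recon_embedding, stimulus_embedding)
    w_obj, w_ti, w_ii = weights
    return w_obj * obj_acc + w_ti * ti_sim + w_ii * ii_sim

In this sketch, a higher combined reward would favor decoded captions that name the right objects and stay semantically close to the original stimulus, which is the role the abstract assigns to the three metrics during decoding.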

@article{xia2025_2505.22150,
  title={Improving Brain-to-Image Reconstruction via Fine-Grained Text Bridging},
  author={Runze Xia and Shuo Feng and Renzhi Wang and Congchi Yin and Xuyun Wen and Piji Li},
  journal={arXiv preprint arXiv:2505.22150},
  year={2025}
}