Brain Dialogue Interface (BDI): A User-Friendly fMRI Model for Interactive Brain Decoding

17 June 2024
Heng-Chiao Huang
Lin Zhao
Zihao Wu
Xiaowei Yu
Jing Zhang
Xintao Hu
Dajiang Zhu
Tianming Liu
Abstract

Brain decoding techniques are essential for understanding the neurocognitive system. Although numerous methods have been introduced in this field, accurately aligning complex external stimuli with brain activity remains a formidable challenge. To ease this alignment problem, many studies simplify their models by adopting single-task paradigms and linking brain activity to the external world directly through classification. Although this improves decoding accuracy, such models often generalize poorly when transferred to other task paradigms. To address this issue, this study introduces a user-friendly decoding model that enables dynamic communication with the brain, in contrast to the static decoding approaches of traditional studies. The model functions as a brain simulator, allowing interactive engagement with the brain and decoding a subject's experiences through dialogue-like queries. Uniquely, our model is trained in a completely unsupervised and task-free manner. Our experiments demonstrate the feasibility and versatility of the proposed method. Notably, the model shows strong signal-compression capability, representing the entire brain signal of approximately 185,751 voxels with just 32 signals. Furthermore, we show how our model can integrate seamlessly with multimodal models, enhancing the potential for controlling brain decoding through textual or image inputs.
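
The compression claim above (a whole-brain fMRI signal of roughly 185,751 voxels represented by just 32 signals) can be pictured with a simple unsupervised autoencoder. The sketch below is illustrative only and is not the authors' architecture: the hidden width, optimizer, and training loop are assumptions, and only the voxel and latent dimensions come from the abstract.

```python
# Minimal sketch (not the paper's model): an unsupervised autoencoder that
# compresses a flattened whole-brain fMRI frame (~185,751 voxels) into 32
# latent "signals". Hidden width, optimizer, and loss are assumptions.
import torch
import torch.nn as nn

N_VOXELS = 185_751   # whole-brain voxel count reported in the abstract
N_LATENT = 32        # compressed signal count reported in the abstract

class BrainAutoencoder(nn.Module):
    def __init__(self, n_voxels: int = N_VOXELS, n_latent: int = N_LATENT):
        super().__init__()
        # Encoder: voxels -> 32 latent signals (hidden width is an assumption)
        self.encoder = nn.Sequential(
            nn.Linear(n_voxels, 1024), nn.GELU(),
            nn.Linear(1024, n_latent),
        )
        # Decoder: 32 latent signals -> reconstructed voxels
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 1024), nn.GELU(),
            nn.Linear(1024, n_voxels),
        )

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)       # compressed representation (32 signals)
        x_hat = self.decoder(z)   # reconstruction used for the unsupervised loss
        return z, x_hat

# Task-free, unsupervised training step: a reconstruction objective, no labels.
model = BrainAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

batch = torch.randn(8, N_VOXELS)  # placeholder for preprocessed fMRI frames
z, recon = model(batch)
loss = loss_fn(recon, batch)
loss.backward()
optimizer.step()
print(z.shape)                     # torch.Size([8, 32])
```

The reconstruction loss is the only training signal here, mirroring the abstract's statement that training is completely unsupervised and task-free; the dialogue-like querying and multimodal control described in the paper would operate on top of such a compressed representation.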
