Multi-View Wireless Sensing via Conditional Generative Learning: Framework and Model Design

Abstract

In this paper, we incorporate physical knowledge into learning-based high-precision target sensing using the multi-view channel state information (CSI) between multiple base stations (BSs) and user equipment (UEs). Such a multi-view sensing problem can be naturally cast as conditional generation. To this end, we design a bipartite neural network architecture: the first part uses an elaborately designed encoder to fuse the latent target features embedded in the multi-view CSI, and the second feeds them as conditioning inputs to a powerful generative model that guides the target's reconstruction. Specifically, the encoder is designed to capture the physical correlation between the CSI and the target and to adapt to the number and positions of BS-UE pairs. The view-specific nature of CSI is assimilated by introducing a spatial positional embedding scheme that exploits the structure of electromagnetic (EM) wave propagation channels. Finally, a conditional diffusion model with a weighted loss is employed to generate the target's point cloud from the fused features. Extensive numerical results demonstrate that the proposed generative multi-view (Gen-MV) sensing framework offers excellent flexibility and significantly improves the reconstruction quality of the target's shape and EM properties.
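To make the diffusion stage concrete, the following is a minimal sketch of how a conditional diffusion model with a weighted loss could be trained on a target point cloud. All specifics here are assumptions for illustration, not the paper's design: the DDPM-style linear noise schedule, the dimensions (`N`, `D`, `C`), the zero-predictor stand-in for the denoising network, and the choice to up-weight the EM-property channel in the loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): N points, each with
# 3-D coordinates + one EM-property channel, conditioned on a C-dim vector.
N, D, C = 256, 4, 64

# DDPM-style linear noise schedule (a common choice; the paper's schedule
# is unspecified in the abstract).
T = 1000
betas = np.linspace(1e-4, 2e-2, T)
alphas_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t, eps):
    """Forward diffusion: noise the clean point cloud x0 to step t."""
    a = np.sqrt(alphas_bar[t])
    s = np.sqrt(1.0 - alphas_bar[t])
    return a * x0 + s * eps

def weighted_loss(eps_pred, eps, w):
    """Per-channel weighted MSE; w could, e.g., up-weight EM properties."""
    return np.mean(w * (eps_pred - eps) ** 2)

# Toy data: the condition stands in for the encoder's fused CSI features.
x0 = rng.standard_normal((N, D))     # clean target point cloud
cond = rng.standard_normal(C)        # fused multi-view CSI feature vector
t = 500
eps = rng.standard_normal((N, D))
xt = q_sample(x0, t, eps)

# A real model would predict eps from (xt, t, cond); a zero predictor
# is used here only to exercise the shapes and the loss.
eps_pred = np.zeros_like(xt)
w = np.ones((N, D))
w[:, 3] = 2.0  # hypothetical: double weight on the EM-property channel
loss = weighted_loss(eps_pred, eps, w)
```

In an actual implementation, `eps_pred` would come from a denoising network that takes the noisy point cloud, the timestep, and the fused CSI condition, and the schedule and weights would be tuned to the sensing task.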

@article{xing2025_2505.12664,
  title={Multi-View Wireless Sensing via Conditional Generative Learning: Framework and Model Design},
  author={Ziqing Xing and Zhaoyang Zhang and Zirui Chen and Hongning Ruan and Zhaohui Yang},
  journal={arXiv preprint arXiv:2505.12664},
  year={2025}
}