A CGAN-LSTM-Based Framework for Time-Varying Non-Stationary Channel Modeling

3 March 2025
Keying Guo
Ruisi He
Mi Yang
Yuxin Zhang
Bo Ai
Haoxiang Zhang
Jiahui Han
Ruifeng Chen
AI4TS · GAN · BDL
Abstract

Time-varying non-stationary channels, with complex dynamic variations and temporal evolution characteristics, pose significant challenges for channel modeling and communication system performance evaluation. Most existing time-varying channel modeling methods focus on predicting the channel state at a given moment or simulating short-term channel fluctuations, and are therefore unable to capture the long-term evolution of the channel. This paper emphasizes the generation of long-term dynamic channels to fully capture the evolution of non-stationary channel properties. The generated channel not only reflects temporal dynamics but also ensures consistent stationarity. We propose a hybrid deep learning framework that combines conditional generative adversarial networks (CGAN) with long short-term memory (LSTM) networks. A stationarity-constrained approach is designed to ensure the temporal correlation of the generated time-series channel. This method can generate channels with the required temporal non-stationarity. The model is validated by comparing channel statistical features, and the results show that the generated channel is in good agreement with the raw channel and performs well in terms of non-stationarity.
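The abstract does not include an implementation, but the architecture it describes (an LSTM-based conditional GAN with a stationarity constraint) can be sketched roughly as follows. This is a minimal illustrative sketch in PyTorch under assumed dimensions; the module layout, the condition vector, and in particular the stationarity_penalty surrogate (variance of short-window power across the sequence) are placeholders of my own, not the authors' formulation.

import torch
import torch.nn as nn

class LSTMGenerator(nn.Module):
    # Maps a noise sequence plus a condition vector to a time series of
    # channel coefficients (e.g. complex gains represented as 2 reals).
    def __init__(self, noise_dim=16, cond_dim=8, hidden_dim=64, out_dim=2):
        super().__init__()
        self.lstm = nn.LSTM(noise_dim + cond_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, out_dim)

    def forward(self, z, cond):
        # z: (batch, seq_len, noise_dim); cond: (batch, cond_dim), repeated over time
        cond_seq = cond.unsqueeze(1).expand(-1, z.size(1), -1)
        h, _ = self.lstm(torch.cat([z, cond_seq], dim=-1))
        return self.proj(h)                      # (batch, seq_len, out_dim)

class LSTMDiscriminator(nn.Module):
    # Scores a (channel sequence, condition) pair as real or generated.
    def __init__(self, in_dim=2, cond_dim=8, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim + cond_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x, cond):
        cond_seq = cond.unsqueeze(1).expand(-1, x.size(1), -1)
        h, _ = self.lstm(torch.cat([x, cond_seq], dim=-1))
        return self.head(h[:, -1])               # logit from the last time step

def stationarity_penalty(x, win=32):
    # Hypothetical surrogate for the paper's stationarity constraint:
    # penalize variation of short-window second moments along the sequence,
    # encouraging consistent local statistics in the generated series.
    power = x.pow(2).sum(-1)                     # (batch, seq_len) instantaneous power
    wins = power.unfold(1, win, win).mean(-1)    # (batch, n_windows) per-window mean
    return wins.var(dim=1).mean()

In a training loop, one would combine the usual conditional GAN losses with this penalty weighted by a coefficient, e.g. g_loss = adv_loss + lambda_s * stationarity_penalty(fake), so the generator is pushed toward sequences whose local statistics remain consistent, which is the role the abstract assigns to the stationarity-constrained approach.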

@article{guo2025_2503.13468,
  title={A CGAN-LSTM-Based Framework for Time-Varying Non-Stationary Channel Modeling},
  author={Keying Guo and Ruisi He and Mi Yang and Yuxin Zhang and Bo Ai and Haoxiang Zhang and Jiahui Han and Ruifeng Chen},
  journal={arXiv preprint arXiv:2503.13468},
  year={2025}
}