
Deep Reinforcement Learning for Unmanned Aerial Vehicle-Assisted Vehicular Networks

Abstract

Unmanned aerial vehicles (UAVs) are envisioned to complement the 5G communication infrastructure in future smart cities. Hot spots easily form at road intersections, where effective communication among vehicles is challenging. UAVs can serve as relays, offering low cost, easy deployment, line-of-sight links, and flexible mobility. In this paper, we study a UAV-assisted vehicular network in which the UAV jointly adjusts its transmission control (power and channel) and 3D flight to maximize the total throughput. First, we formulate a Markov decision process (MDP) problem by modeling the mobility of the UAV and vehicles and the state transitions. Second, because key environment variables are unknown or unmeasurable, especially in 5G, we solve the problem with a deep reinforcement learning method, namely the deep deterministic policy gradient (DDPG), and propose three solutions with different control objectives. Moreover, considering the energy consumption of 3D flight, we extend the proposed solutions to maximize the total throughput per unit of energy by encouraging or discouraging the UAV's mobility; to achieve this, we modify the DDPG framework. Third, in a simplified model with small state and action spaces, we verify the optimality of the proposed algorithms. Compared with two baseline schemes, we demonstrate the effectiveness of the proposed algorithms in a realistic model.
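
The abstract describes solving the MDP with DDPG over a continuous joint action (transmit power, channel selection, 3D flight). Below is a minimal sketch of a DDPG actor-critic update in PyTorch, included only to illustrate the general technique; the state/action dimensions, network sizes, hyperparameters, and the reward interpretation are illustrative assumptions, not the paper's settings.

    # Minimal DDPG sketch (PyTorch assumed); all dimensions/hyperparameters are illustrative.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    STATE_DIM = 8    # hypothetical: UAV position, vehicle positions, channel states, ...
    ACTION_DIM = 5   # hypothetical: transmit power, channel choice (relaxed), 3D velocity
    GAMMA, TAU = 0.99, 0.005

    class Actor(nn.Module):
        """Deterministic policy: maps a state to a continuous control action in [-1, 1]."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM, 64), nn.ReLU(),
                nn.Linear(64, ACTION_DIM), nn.Tanh())
        def forward(self, s):
            return self.net(s)

    class Critic(nn.Module):
        """Q-network: estimates the return of a state-action pair."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                nn.Linear(64, 1))
        def forward(self, s, a):
            return self.net(torch.cat([s, a], dim=-1))

    actor, critic = Actor(), Critic()
    actor_tgt, critic_tgt = Actor(), Critic()
    actor_tgt.load_state_dict(actor.state_dict())
    critic_tgt.load_state_dict(critic.state_dict())
    actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
    critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

    def ddpg_update(batch):
        """One DDPG step on a replay-buffer batch (s, a, r, s_next).
        Here r would stand for the per-step throughput (or throughput per energy unit)."""
        s, a, r, s_next = batch
        with torch.no_grad():
            q_target = r + GAMMA * critic_tgt(s_next, actor_tgt(s_next))
        critic_loss = F.mse_loss(critic(s, a), q_target)
        critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

        actor_loss = -critic(s, actor(s)).mean()  # ascend the critic's Q-value
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

        # Soft (Polyak) update of the target networks
        for tgt, src in ((actor_tgt, actor), (critic_tgt, critic)):
            for p_t, p in zip(tgt.parameters(), src.parameters()):
                p_t.data.mul_(1 - TAU).add_(TAU * p.data)

    # Toy usage with random transitions, only to show the call signature.
    batch = (torch.randn(32, STATE_DIM), torch.rand(32, ACTION_DIM) * 2 - 1,
             torch.randn(32, 1), torch.randn(32, STATE_DIM))
    ddpg_update(batch)

DDPG fits this setting because the actor outputs a deterministic continuous action, which matches joint power and 3D-velocity control without discretizing the action space.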
