Pushing Large Language Models to the 6G Edge: Vision, Challenges, and Opportunities

28 September 2023
Zheng Lin
Guanqiao Qu
Qiyuan Chen
Xianhao Chen
Zhe Chen
Kaibin Huang
arXiv (abs) · PDF · HTML
Main: 6 pages · 5 figures · Bibliography: 1 page
Abstract

Large language models (LLMs), which have shown remarkable capabilities, are revolutionizing AI development and potentially shaping our future. However, given their multimodality, the status quo cloud-based deployment faces some critical challenges: 1) long response time; 2) high bandwidth costs; and 3) the violation of data privacy. 6G mobile edge computing (MEC) systems may resolve these pressing issues. In this article, we explore the potential of deploying LLMs at the 6G edge. We start by introducing killer applications powered by multimodal LLMs, including robotics and healthcare, to highlight the need for deploying LLMs in the vicinity of end users. Then, we identify the critical challenges for LLM deployment at the edge and envision the 6G MEC architecture for LLMs. Furthermore, we delve into two design aspects, i.e., edge training and edge inference for LLMs. In both aspects, considering the inherent resource limitations at the edge, we discuss various cutting-edge techniques, including split learning/inference, parameter-efficient fine-tuning, quantization, and parameter-sharing inference, to facilitate the efficient deployment of LLMs. This article serves as a position paper for thoroughly identifying the motivation, challenges, and pathway for empowering LLMs at the 6G edge.
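
To make the edge-training discussion concrete, below is a minimal sketch (not taken from the paper) of one technique the abstract names, parameter-efficient fine-tuning, implemented as a LoRA-style low-rank adapter in PyTorch. The layer size, rank, and scaling factor are illustrative assumptions; the point is only that the frozen base weights dominate the parameter count while the trainable update stays small enough for resource-limited edge devices.

    # Illustrative sketch of LoRA-style parameter-efficient fine-tuning.
    # Names, sizes, and hyperparameters are assumptions, not the paper's setup.
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen base linear layer plus a small trainable low-rank update."""
        def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False          # base weights stay frozen on the device
            self.lora_a = nn.Linear(base.in_features, rank, bias=False)
            self.lora_b = nn.Linear(rank, base.out_features, bias=False)
            nn.init.zeros_(self.lora_b.weight)   # adapter starts as a no-op update
            self.scale = alpha / rank

        def forward(self, x):
            return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

    # Wrap a single (hypothetical) 4096x4096 projection layer: only the two
    # low-rank factors are trained, which is what keeps on-device fine-tuning
    # memory within edge budgets.
    layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(f"trainable parameters: {trainable}")  # 65,536 vs ~16.8M frozen
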

@article{lin2025_2309.16739,
  title={Pushing Large Language Models to the 6G Edge: Vision, Challenges, and Opportunities},
  author={Zheng Lin and Guanqiao Qu and Qiyuan Chen and Xianhao Chen and Zhe Chen and Kaibin Huang},
  journal={arXiv preprint arXiv:2309.16739},
  year={2025}
}