Paper Title

A Tutorial on Ultra-Reliable and Low-Latency Communications in 6G: Integrating Domain Knowledge into Deep Learning

Authors

Changyang She, Chengjian Sun, Zhouyou Gu, Yonghui Li, Chenyang Yang, H. Vincent Poor, Branka Vucetic

Abstract

As one of the key communication scenarios in the 5th and also the 6th generation (6G) of mobile communication networks, ultra-reliable and low-latency communications (URLLC) will be central for the development of various emerging mission-critical applications. State-of-the-art mobile communication systems do not fulfill the end-to-end delay and overall reliability requirements of URLLC. In particular, a holistic framework that takes into account latency, reliability, availability, scalability, and decision making under uncertainty is lacking. Driven by recent breakthroughs in deep neural networks, deep learning algorithms have been considered as promising ways of developing enabling technologies for URLLC in future 6G networks. This tutorial illustrates how domain knowledge (models, analytical tools, and optimization frameworks) of communications and networking can be integrated into different kinds of deep learning algorithms for URLLC. We first provide some background of URLLC and review promising network architectures and deep learning frameworks for 6G. To better illustrate how to improve learning algorithms with domain knowledge, we revisit model-based analytical tools and cross-layer optimization frameworks for URLLC. Following that, we examine the potential of applying supervised/unsupervised deep learning and deep reinforcement learning in URLLC and summarize related open problems. Finally, we provide simulation and experimental results to validate the effectiveness of different learning algorithms and discuss future directions.
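
To make the abstract's central idea more concrete, below is a minimal, hypothetical sketch of one way domain knowledge could be embedded into a deep learning pipeline for radio resource allocation: a small neural network maps channel gains to bandwidth allocations, and a model-based quality-of-service requirement (a Shannon-type rate target) enters the training loss as a differentiable penalty. This is not the tutorial's own method; the network size, rate model, targets, and penalty weight are all illustrative assumptions.

```python
# Illustrative sketch only (not from the paper): unsupervised "learning to optimize"
# where domain knowledge appears as a differentiable QoS penalty in the loss.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy policy: map per-user channel gains to nonnegative bandwidth allocations.
policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 4), nn.Softplus())
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

rate_target = 2.0      # per-user rate requirement (normalized, assumed)
penalty_weight = 10.0  # weight on the QoS violation term (assumed)

for step in range(500):
    gains = 1.0 + 9.0 * torch.rand(64, 4)        # random channel gains (toy model)
    bandwidth = policy(gains)                     # allocated bandwidth per user
    rates = bandwidth * torch.log2(1.0 + gains)   # Shannon-type rate model
    # Domain knowledge as a soft constraint: penalize users below the rate target.
    violation = torch.relu(rate_target - rates).mean()
    loss = bandwidth.mean() + penalty_weight * violation  # bandwidth cost + penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In this toy setup the learned policy trades off total bandwidth against the rate constraint; the tutorial itself discusses far richer ways of combining model-based analytical tools and cross-layer optimization with supervised/unsupervised deep learning and deep reinforcement learning for URLLC.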
