Paper Title
Reinforcement Learning Framework for Server Placement and Workload Allocation in Multi-Access Edge Computing
Paper Authors
Paper Abstract
Cloud computing is a reliable solution for providing distributed computation power. However, real-time response remains challenging given the enormous amount of data generated by IoT devices in 5G and 6G networks. Thus, multi-access edge computing (MEC), which distributes edge servers in the proximity of end-users to achieve low latency alongside higher processing power, is increasingly becoming a vital factor for the success of modern applications. This paper addresses the problem of minimizing both the network delay, which is the main objective of MEC, and the number of edge servers, so as to provide a MEC design with minimum cost. This MEC design consists of edge server placement and base station allocation, which makes it a joint combinatorial optimization problem (COP). Recently, reinforcement learning (RL) has shown promising results for COPs. However, modeling real-world problems with RL when the state and action spaces are large still needs investigation. We propose a novel RL framework with an efficient representation and modeling of the state space, action space, and penalty function in the design of the underlying Markov Decision Process (MDP) for solving our problem.
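The abstract describes the MDP design only at a high level. As a rough illustration of what such a formulation could look like, the following minimal Python sketch models the state as the current placement vector, an action as opening a server at a candidate site, and the penalty as a weighted sum of mean network delay and server count, mirroring the paper's two objectives. The class name, the greedy nearest-server allocation of base stations, and the weights `w_delay`/`w_cost` are illustrative assumptions, not the authors' actual design.

```python
import numpy as np

class EdgeServerPlacementEnv:
    """Hypothetical MDP sketch for joint edge server placement and
    base station (BS) allocation.

    State:  boolean vector marking which candidate sites host a server.
    Action: index of a candidate site at which to open a server.
    Reward: negative penalty, where the penalty combines the mean
            BS-to-server delay with the number of open servers.
    """

    def __init__(self, delay_matrix, max_servers, w_delay=1.0, w_cost=0.1):
        self.delay = np.asarray(delay_matrix)  # delay[b, s]: BS b -> site s
        self.n_bs, self.n_sites = self.delay.shape
        self.max_servers = max_servers
        self.w_delay, self.w_cost = w_delay, w_cost
        self.reset()

    def reset(self):
        # Start with no servers placed.
        self.placement = np.zeros(self.n_sites, dtype=bool)
        return self.placement.copy()

    def step(self, site):
        # Open a server at the chosen candidate site.
        self.placement[site] = True
        done = self.placement.sum() >= self.max_servers
        return self.placement.copy(), -self._penalty(), done

    def _penalty(self):
        # Allocate each BS to its nearest open server (greedy allocation,
        # an assumption for illustration), then combine the two objectives.
        per_bs_delay = self.delay[:, self.placement].min(axis=1)
        return (self.w_delay * per_bs_delay.mean()
                + self.w_cost * self.placement.sum())
```

A possible usage, with random delays standing in for real network measurements:

```python
rng = np.random.default_rng(0)
env = EdgeServerPlacementEnv(delay_matrix=rng.random((20, 8)), max_servers=3)
state = env.reset()
state, reward, done = env.step(2)  # open a server at candidate site 2
```

An RL agent trained on such an environment would learn which sites to open so that the cumulative penalty, i.e. delay plus server cost, is minimized; how the paper actually compresses the large state and action spaces is the contribution the abstract highlights.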