Title

Adaptive Federated Learning With Gradient Compression in Uplink NOMA

Authors

Haijian Sun, Xiang Ma, Rose Qingyang Hu

Abstract

Federated learning (FL) is an emerging machine learning technique that aggregates model attributes from a large number of distributed devices. Several unique features, such as energy saving and privacy preservation, make FL a highly promising learning approach for power-limited and privacy-sensitive devices. Although distributed computing can reduce the amount of information that needs to be uploaded, model updates in FL can still experience a performance bottleneck, especially for updates over wireless connections. In this work, we investigate the performance of FL updates with mobile edge devices that are connected to the parameter server (PS) via practical wireless links, where the uplink from user to PS has very limited capacity. Unlike existing works, we apply non-orthogonal multiple access (NOMA) together with gradient compression in the wireless uplink. Simulation results show that our proposed scheme can significantly reduce aggregation latency while achieving similar accuracy.
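The abstract does not specify which gradient compression scheme is used, but a common choice in FL uplinks is top-k sparsification: each device transmits only the k largest-magnitude gradient entries (as index–value pairs), shrinking the uplink payload. A minimal illustrative sketch, assuming top-k sparsification rather than the authors' exact method:

```python
import numpy as np

def topk_compress(grad, k):
    """Keep only the k largest-magnitude entries of a gradient vector.

    Generic top-k sparsification sketch; the paper's exact compression
    scheme is not given in the abstract, so this is an assumption.
    """
    idx = np.argpartition(np.abs(grad), -k)[-k:]  # indices of top-k magnitudes
    return idx, grad[idx]

def topk_decompress(idx, values, size):
    """Reconstruct a dense gradient from the sparse (index, value) pairs."""
    dense = np.zeros(size)
    dense[idx] = values
    return dense

# Example: a 5-entry gradient compressed to its 2 largest-magnitude entries.
grad = np.array([0.1, -2.0, 0.03, 1.5, -0.4])
idx, vals = topk_compress(grad, 2)
restored = topk_decompress(idx, vals, grad.size)
```

Only 2 of 5 entries (here -2.0 and 1.5) are sent over the uplink; the PS fills the rest with zeros before aggregation, trading a small accuracy loss for much lower transmission latency.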
