Paper Title

Decentralized Federated Learning via Non-Coherent Over-the-Air Consensus

Paper Authors

Michelusi, Nicolò

Paper Abstract

This paper presents NCOTA-DGD, a Decentralized Gradient Descent (DGD) algorithm that combines local gradient descent with a novel Non-Coherent Over-The-Air (NCOTA) consensus scheme to solve distributed machine-learning problems over wirelessly-connected systems. NCOTA-DGD leverages the waveform superposition properties of the wireless channels: it enables simultaneous transmissions under half-duplex constraints, by mapping local optimization signals to a mixture of preamble sequences, and consensus via non-coherent combining at the receivers. NCOTA-DGD operates without channel state information at transmitters and receivers, and leverages the average channel pathloss to mix signals, without explicit knowledge of the mixing weights (typically known in consensus-based optimization algorithms). It is shown both theoretically and numerically that, for smooth and strongly-convex problems with fixed consensus and learning stepsizes, the updates of NCOTA-DGD converge in Euclidean distance to the global optimum with rate $\mathcal O(K^{-1/4})$ for a target of $K$ iterations. NCOTA-DGD is evaluated numerically over a logistic regression problem, showing faster convergence vis-à-vis running time than implementations of the classical DGD algorithm over digital and analog orthogonal channels.
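The over-the-air combining in NCOTA is a waveform-level mechanism and is not reproduced here, but the overall structure the abstract describes, a fixed-stepsize consensus mixing step followed by a local gradient-descent step at each node, can be sketched compactly. The snippet below is a minimal, illustrative sketch on a synthetic logistic-regression problem: the non-coherent over-the-air estimate is replaced by an abstract pathloss-weighted mixing matrix plus additive noise, and all names and constants (`N`, `d`, `K`, `eta`, `gamma`, `local_grad`) are hypothetical choices, not taken from the paper.

```python
# Minimal sketch of a DGD-style update with a noisy consensus/mixing step.
# NOTE: this is NOT the paper's NCOTA scheme; the over-the-air, non-coherent
# combining is abstracted into a pathloss-weighted mixing matrix W plus noise.
import numpy as np

rng = np.random.default_rng(0)

N, d, K = 10, 5, 2000           # nodes, model dimension, target iterations
eta = 0.05                      # fixed learning stepsize (illustrative)

# Synthetic local logistic-regression data, one small dataset per node.
X = rng.normal(size=(N, 50, d))
w_true = rng.normal(size=d)
y = (rng.random((N, 50)) < 1.0 / (1.0 + np.exp(-(X @ w_true)))).astype(float)

def local_grad(w, Xi, yi, lam=0.1):
    """Gradient of the regularized logistic loss at one node."""
    p = 1.0 / (1.0 + np.exp(-(Xi @ w)))
    return Xi.T @ (p - yi) / len(yi) + lam * w

# Mixing weights built from symmetric "average pathloss" coefficients; gamma
# acts as a fixed consensus stepsize, chosen so that W is doubly stochastic
# with a nonnegative diagonal.
A = rng.uniform(0.1, 1.0, size=(N, N))
A = (A + A.T) / 2
np.fill_diagonal(A, 0.0)
gamma = 0.5 / A.sum(axis=1).max()
W = gamma * A
np.fill_diagonal(W, 1.0 - W.sum(axis=1))

w = np.zeros((N, d))            # local model iterates, one row per node
for k in range(K):
    # Consensus step: a weighted mixture of neighbors' iterates plus noise,
    # standing in for the error of the non-coherent over-the-air estimate.
    mixed = W @ w + 0.01 * rng.normal(size=(N, d))
    # Local gradient-descent step on each node's own data.
    grads = np.stack([local_grad(w[i], X[i], y[i]) for i in range(N)])
    w = mixed - eta * grads

print("node disagreement:", np.linalg.norm(w - w.mean(axis=0)))
```

With a doubly stochastic mixing matrix and fixed stepsizes, the mixing step contracts the local iterates toward consensus while each gradient step pulls them toward the nodes' own minimizers, mirroring the fixed-stepsize DGD structure the abstract analyzes; the actual NCOTA scheme achieves the mixing without explicit knowledge of the weights.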
