Paper Title

DONE: Distributed Approximate Newton-type Method for Federated Edge Learning

Paper Authors

Dinh, Canh T., Tran, Nguyen H., Nguyen, Tuan Dung, Bao, Wei, Balef, Amir Rezaei, Zhou, Bing B., Zomaya, Albert Y.

Paper Abstract

There is growing interest in applying distributed machine learning to edge computing, forming federated edge learning. Federated edge learning faces non-i.i.d. and heterogeneous data, and the communication between edge workers, possibly across distant locations and over unstable wireless networks, is more costly than their local computational overhead. In this work, we propose DONE, a distributed approximate Newton-type algorithm with a fast convergence rate for communication-efficient federated edge learning. First, with strongly convex and smooth loss functions, DONE approximates the Newton direction in a distributed manner using the classical Richardson iteration on each edge worker. Second, we prove that DONE has linear-quadratic convergence and analyze its communication complexity. Finally, experimental results with non-i.i.d. and heterogeneous data show that DONE attains performance comparable to Newton's method. Notably, DONE requires fewer communication iterations than distributed gradient descent and outperforms the state-of-the-art approaches DANE and FEDL on non-quadratic loss functions.
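The abstract's core computational idea is to approximate the Newton direction d = H⁻¹g without forming or inverting the Hessian, by running the classical Richardson iteration d ← d + α(g − Hd), which converges for 0 < α < 2/λ_max(H) when the loss is strongly convex (H positive definite). The sketch below is a minimal, centralized illustration of that single step, not the paper's distributed implementation; the function name `richardson_newton_direction`, the step-size choice, and the iteration count are assumptions made for this toy example.

```python
import numpy as np

def richardson_newton_direction(H, g, alpha, num_iters=200):
    """Approximate the Newton direction d = H^{-1} g via Richardson iteration.

    Iterates d_{k+1} = d_k + alpha * (g - H @ d_k), whose fixed point solves
    H d = g. For symmetric positive definite H (strongly convex loss), the
    iteration converges whenever 0 < alpha < 2 / lambda_max(H).
    """
    d = np.zeros_like(g)
    for _ in range(num_iters):
        d = d + alpha * (g - H @ d)
    return d

# Toy usage (assumed setup): a strongly convex quadratic with Hessian H.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = A @ A.T + 5 * np.eye(5)                 # symmetric positive definite
g = rng.standard_normal(5)
alpha = 1.0 / np.linalg.eigvalsh(H).max()   # safely inside (0, 2/lambda_max)
d_approx = richardson_newton_direction(H, g, alpha)
print(np.allclose(d_approx, np.linalg.solve(H, g), atol=1e-6))  # True
```

In DONE itself, this iteration is carried out by each edge worker on its local data, so only vectors (not Hessians) need to be exchanged; the centralized version above is merely meant to show why Richardson iteration recovers the Newton direction.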
