Paper Title
DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training
Paper Authors
Paper Abstract
Personalized federated learning is proposed to handle the data heterogeneity problem amongst clients by learning dedicated, tailored local models for each user. However, existing works are often built in a centralized way, leading to high communication pressure and high vulnerability when a failure or an attack on the central server occurs. In this work, we propose a novel personalized federated learning framework in a decentralized (peer-to-peer) communication protocol named Dis-PFL, which employs personalized sparse masks to customize sparse local models on the edge. To further save the communication and computation cost, we propose a decentralized sparse training technique, which means that each local model in Dis-PFL only maintains a fixed number of active parameters throughout the whole local training and peer-to-peer communication process. Comprehensive experiments demonstrate that Dis-PFL significantly reduces the communication bottleneck at the busiest node among all clients and, at the same time, achieves higher model accuracy with less computation cost and fewer communication rounds. Furthermore, we demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities and achieves better personalized performance.
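To make the core idea of the abstract concrete, below is a minimal, illustrative sketch of how a personalized sparse mask, a fixed budget of active parameters, and peer-to-peer averaging could interact. It is not the authors' implementation of Dis-PFL (which involves dynamic sparse training and mask evolution detailed in the paper); the names `random_mask`, `apply_mask`, `neighbor_average`, and the constant `SPARSITY` are hypothetical and chosen only for this example.

```python
# Illustrative sketch (not the authors' code): personalized sparse masks with
# a fixed number of active parameters, combined via peer-to-peer averaging.
import numpy as np

SPARSITY = 0.8  # hypothetical fraction of weights pruned to zero per client


def random_mask(shape, sparsity, rng):
    """Sample a binary mask keeping (1 - sparsity) of the entries active."""
    mask = np.zeros(int(np.prod(shape)), dtype=np.float32)
    n_active = int(round((1.0 - sparsity) * mask.size))
    active = rng.choice(mask.size, size=n_active, replace=False)
    mask[active] = 1.0
    return mask.reshape(shape)


def apply_mask(weights, mask):
    """Zero out the inactive parameters of a local model."""
    return weights * mask


def neighbor_average(own_weights, neighbor_weights, own_mask):
    """Average a client's weights with its peers', then re-apply the client's
    personalized mask so its number of active parameters stays fixed."""
    stacked = np.stack([own_weights] + list(neighbor_weights), axis=0)
    return apply_mask(stacked.mean(axis=0), own_mask)


# Toy usage: three peer clients, each with its own sparse mask over one layer.
rng = np.random.default_rng(0)
shape = (4, 4)
masks = [random_mask(shape, SPARSITY, rng) for _ in range(3)]
weights = [apply_mask(rng.normal(size=shape).astype(np.float32), m) for m in masks]

# One peer-to-peer round for client 0: mix with neighbors 1 and 2, then
# restore its own sparsity pattern so only masked-in entries carry values.
updated = neighbor_average(weights[0], [weights[1], weights[2]], masks[0])
assert (updated[masks[0] == 0] == 0).all()  # inactive entries remain zero
print("active parameters kept:", int(masks[0].sum()))
```

In a sketch like this, only the masked-in entries would ever need to be sent to neighbors, which is where the communication savings described in the abstract would come from.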