Paper Title
CC-FedAvg: Computationally Customized Federated Averaging
Paper Authors
Paper Abstract
Federated learning (FL) is an emerging paradigm for training models on distributed data from numerous Internet of Things (IoT) devices. It inherently assumes a uniform capacity among participants. In practice, however, participants have diverse computational resources due to varying conditions, such as different energy budgets or the concurrent execution of unrelated tasks. Participants with insufficient computation budgets must plan their use of limited computational resources appropriately; otherwise, they would be unable to complete the entire training procedure, resulting in degraded model performance. To address this issue, we propose a strategy for estimating local models without computationally intensive iterations. Building on it, we propose Computationally Customized Federated Averaging (CC-FedAvg), which allows each participant to decide, in every round, whether to perform conventional local training or model estimation according to its current computational budget. Both theoretical analysis and extensive experiments indicate that CC-FedAvg has the same convergence rate as, and comparable performance to, FedAvg without resource constraints. Furthermore, CC-FedAvg can be viewed as a computation-efficient version of FedAvg that retains model performance while considerably reducing computational overhead.
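Since the abstract describes the per-round decision only at a high level, the following minimal Python sketch illustrates how such a scheme could look. The quadratic toy objective, the `local_training` helper, and the estimation rule used here (a budget-constrained participant reuses its most recent cached update) are illustrative assumptions for this sketch, not the paper's actual algorithm.

```python
import numpy as np

def local_training(global_params, data, lr=0.1, epochs=5):
    """Stand-in for standard FedAvg local training on a hypothetical
    toy objective: 0.5 * ||w - mean(data)||^2 per participant."""
    params = global_params.copy()
    for _ in range(epochs):
        grad = params - data.mean(axis=0)  # gradient of the toy loss
        params -= lr * grad
    return params

def cc_fedavg_round(global_params, participants, budgets, cached_updates):
    """One communication round of a CC-FedAvg-style scheme (sketch).

    A participant with sufficient budget performs normal local training
    and caches its update; otherwise it estimates its local model without
    any training iterations. The estimation rule below (reusing the most
    recent cached update) is an assumption made for this illustration.
    """
    local_models = []
    for pid, data in participants.items():
        if budgets[pid] > 0:  # enough compute this round: train as usual
            new_params = local_training(global_params, data)
            cached_updates[pid] = new_params - global_params
        else:                 # budget exhausted: estimate instead of train
            est = cached_updates.get(pid, np.zeros_like(global_params))
            new_params = global_params + est
        local_models.append(new_params)
    return np.mean(local_models, axis=0)  # plain FedAvg aggregation

# Toy usage: three participants; participant 2 skips training in round 1.
rng = np.random.default_rng(0)
participants = {p: rng.normal(p, 1.0, size=(20, 2)) for p in range(3)}
global_params = np.zeros(2)
cache = {}
for rnd, budgets in enumerate([{0: 1, 1: 1, 2: 1}, {0: 1, 1: 1, 2: 0}]):
    global_params = cc_fedavg_round(global_params, participants, budgets, cache)
    print(f"round {rnd}: {global_params}")
```

The property the sketch mirrors is that a budget-constrained participant still contributes an estimated local model at essentially zero training cost, so the server-side aggregation proceeds exactly as in FedAvg.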