Paper Title
Green, Quantized Federated Learning over Wireless Networks: An Energy-Efficient Design
Paper Authors
Paper Abstract
In this paper, a green, quantized FL framework, which represents data with a finite precision level in both local training and uplink transmission, is proposed. Here, the finite precision level is captured through the use of quantized neural networks (QNNs) that quantize weights and activations in a fixed-precision format. In the considered FL model, each device trains its QNN and transmits the quantized training result to the base station. Energy models for local training and for transmission with quantization are rigorously derived. To simultaneously minimize the energy consumption and the number of communication rounds, a multi-objective optimization problem is formulated with respect to the number of local iterations, the number of selected devices, and the precision levels for both local training and transmission, while ensuring convergence under a target accuracy constraint. To solve this problem, the convergence rate of the proposed FL system is analytically derived with respect to the system control variables. Then, the Pareto boundary of the problem is characterized using the normal boundary intersection method to provide efficient solutions. Design insights on balancing the tradeoff between the two objectives while achieving a target accuracy are drawn by applying the Nash bargaining solution and analyzing the derived convergence rate. Simulation results show that the proposed FL framework can reduce the energy consumption until convergence by up to 70\% compared to a baseline FL algorithm that represents data with full precision, without compromising the convergence rate.
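To make the fixed-precision representation concrete, the following is a minimal sketch of signed fixed-point quantization of the kind QNNs apply to weights and activations. The bit allocation (1 sign bit, `n_bits - 1` fractional bits) and the clipping range are illustrative assumptions; the paper's exact quantizer and precision levels may differ.

```python
import numpy as np

def quantize_fixed_point(w, n_bits):
    """Map real-valued parameters to a signed fixed-point grid with
    n_bits total bits: 1 sign bit and (n_bits - 1) fractional bits.
    Illustrative only -- not the paper's exact quantizer."""
    scale = 2.0 ** (n_bits - 1)
    # Round to the nearest representable level, then clip to the
    # representable range [-1, 1 - 1/scale].
    return np.clip(np.round(w * scale) / scale, -1.0, 1.0 - 1.0 / scale)

# Example: quantizing a few weights to 4 bits (levels spaced 1/8 apart)
weights = np.array([0.73, -0.29, 0.05, -0.98])
print(quantize_fixed_point(weights, 4))  # -> [ 0.75 -0.25  0.   -1.  ]
```

Lowering `n_bits` shrinks both the arithmetic energy per local iteration and the uplink payload, at the cost of coarser weights, which is precisely the energy-accuracy tradeoff the multi-objective formulation balances.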