Paper Title
Hercules: Boosting the Performance of Privacy-preserving Federated Learning
Paper Authors
Paper Abstract
In this paper, we address the problem of privacy-preserving federated neural network training with $N$ users. We present Hercules, an efficient and high-precision training framework that can tolerate the collusion of up to $N-1$ users. Hercules follows the POSEIDON framework proposed by Sav et al. (NDSS'21), but makes a qualitative leap in performance with the following contributions: (i) we design a novel parallel homomorphic computation method for matrix operations, which enables fast Single Instruction Multiple Data (SIMD) operations over ciphertexts. For the multiplication of two $h\times h$ matrices, our method reduces the computation complexity from $O(h^3)$ to $O(h)$. This greatly improves the training efficiency of the neural network, since ciphertext computation is dominated by convolution operations; (ii) we present an efficient approximation of the sign function based on composite polynomial approximation, which is used to approximate the non-polynomial functions (i.e., ReLU and max) with optimal asymptotic complexity. Extensive experiments on various benchmark datasets (BCW, ESR, CREDIT, MNIST, SVHN, CIFAR-10 and CIFAR-100) show that, compared with POSEIDON, Hercules achieves up to a 4% increase in model accuracy and up to a $60\times$ reduction in computation and communication costs.
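For intuition on contribution (i), the $O(h)$ figure counts ciphertext-level operations: the whole matrix is packed into the SIMD slots of a ciphertext, so each slot-wise multiplication acts on all $h^2$ entries at once. The abstract does not spell out Hercules' exact packing, so the following is a minimal plaintext NumPy sketch of the well-known permutation-based encoding of Jiang et al. (CCS'18), which achieves the same $O(h)$ multiplication count; treat it as an illustration of the technique, not as the authors' construction.

```python
# Plaintext simulation of SIMD-style matrix multiplication using only
# O(h) "ciphertext" (slot-wise) multiplications, in the style of
# Jiang et al. (CCS'18). Illustrative sketch only; Hercules' actual
# packing may differ.
import numpy as np

def sigma(A):
    # sigma(A)[i][j] = A[i][(i + j) mod h]: rotate row i left by i.
    h = A.shape[0]
    return np.array([[A[i][(i + j) % h] for j in range(h)] for i in range(h)])

def tau(B):
    # tau(B)[i][j] = B[(i + j) mod h][j]: rotate column j up by j.
    h = B.shape[0]
    return np.array([[B[(i + j) % h][j] for j in range(h)] for i in range(h)])

def phi(M, k):
    # Column rotation: phi^k(M)[i][j] = M[i][(j + k) mod h].
    return np.roll(M, -k, axis=1)

def psi(M, k):
    # Row rotation: psi^k(M)[i][j] = M[(i + k) mod h][j].
    return np.roll(M, -k, axis=0)

def simd_matmul(A, B):
    # C = sum_{k=0}^{h-1} phi^k(sigma(A)) * psi^k(tau(B)), elementwise.
    # Each summand is ONE slot-wise (SIMD) multiplication over all h^2
    # packed entries, so only O(h) homomorphic multiplications are needed,
    # versus O(h^3) scalar products for the naive algorithm.
    h = A.shape[0]
    sA, tB = sigma(A), tau(B)
    C = np.zeros_like(A)
    for k in range(h):
        C = C + phi(sA, k) * psi(tB, k)
    return C

h = 4
A = np.random.randint(0, 10, (h, h))
B = np.random.randint(0, 10, (h, h))
assert np.array_equal(simd_matmul(A, B), A @ B)
```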
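For contribution (ii), the general idea of composite polynomial approximation is to compose a fixed low-degree polynomial with itself so that the composition converges to $\mathrm{sign}(x)$ on $[-1, 1]$; ReLU and max then follow from sign via standard identities. The sketch below uses the classic iteration $f(x) = (3x - x^3)/2$ purely for illustration; the abstract states that Hercules uses its own polynomials with optimal asymptotic complexity, and those constants are not reproduced here.

```python
# Minimal sketch of composite polynomial approximation of sign(x).
# f(x) = (3x - x^3)/2 maps [-1, 1] into [-1, 1] and pushes nonzero
# inputs toward +/-1 (quadratic convergence near the endpoints).
# Hercules uses optimized composite polynomials; the iteration below
# is an illustrative stand-in, not the paper's construction.
import numpy as np

def approx_sign(x, depth=10):
    # Compose f with itself `depth` times; each step is a degree-3
    # polynomial, so multiplicative depth grows linearly in `depth`
    # while the error shrinks rapidly for |x| bounded away from 0.
    for _ in range(depth):
        x = (3 * x - x ** 3) / 2
    return x

def approx_relu(x):
    # ReLU(x) = x * (1 + sign(x)) / 2, assuming inputs scaled to [-1, 1].
    return x * (1 + approx_sign(x)) / 2

def approx_max(a, b):
    # max(a, b) = (a + b)/2 + |a - b|/2, with |t| = t * sign(t).
    d = a - b
    return (a + b) / 2 + d * approx_sign(d) / 2

xs = np.linspace(-1, 1, 9)
print(np.round(approx_sign(xs), 3))  # close to -1 / 0 / +1
print(np.round(approx_relu(xs), 3))  # close to max(x, 0)
```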