Paper Title

Secure Forward Aggregation for Vertical Federated Neural Networks

Authors

Shuowei Cai, Di Chai, Liu Yang, Junxue Zhang, Yilun Jin, Leye Wang, Kun Guo, Kai Chen

Abstract

Vertical federated learning (VFL) is attracting much attention because it enables cross-silo data cooperation in a privacy-preserving manner. While most research works in VFL focus on linear and tree models, deep models (e.g., neural networks) are not well studied in VFL. In this paper, we focus on SplitNN, a well-known neural network framework in VFL, and identify a trade-off between data security and model performance in SplitNN. Briefly, SplitNN trains the model by exchanging gradients and transformed data. On the one hand, SplitNN suffers from a loss of model performance since multiple parties jointly train the model using transformed data instead of raw data, and a large amount of low-level feature information is discarded. On the other hand, a naive solution of increasing the model performance by aggregating at lower layers in SplitNN (i.e., the data is less transformed and more low-level features are preserved) makes raw data vulnerable to inference attacks. To mitigate the above trade-off, we propose a new neural network protocol in VFL called Secure Forward Aggregation (SFA). It changes the way the transformed data is aggregated and adopts removable masks to protect the raw data. Experimental results show that networks with SFA achieve both data security and high model performance.
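To make the removable-mask idea concrete, below is a minimal NumPy sketch assuming a secure-aggregation-style zero-sum masking scheme; the helper names (`partial_outputs`, `zero_sum_masks`) are illustrative, and the paper's exact mask construction may differ. Each party uploads only a masked version of its bottom-model output; summing across parties cancels the masks and recovers the true aggregate, so no single party's partial output is exposed.

```python
import numpy as np

rng = np.random.default_rng(0)

def partial_outputs(n_parties, dim):
    """Stand-in for each party's bottom-model output on the same batch."""
    return [rng.normal(size=dim) for _ in range(n_parties)]

def zero_sum_masks(n_parties, dim):
    """Random masks constructed so that they cancel when summed across parties."""
    masks = [rng.normal(size=dim) for _ in range(n_parties - 1)]
    masks.append(-np.sum(masks, axis=0))  # last mask makes the total exactly zero
    return masks

dim = 4
outputs = partial_outputs(3, dim)
masks = zero_sum_masks(3, dim)

# Each party sends only its masked output; the raw partial output stays local.
masked = [h + m for h, m in zip(outputs, masks)]

# The aggregator sums the masked outputs; the masks cancel exactly.
aggregate = np.sum(masked, axis=0)
assert np.allclose(aggregate, np.sum(outputs, axis=0))
print(aggregate)
```

Note that this cancellation works because the aggregation is a summation: concatenation-style aggregation (as in vanilla SplitNN) would hand the aggregator each party's output individually, leaving nothing for the masks to cancel against.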
