Paper Title


Towards Sparsified Federated Neuroimaging Models via Weight Pruning

Authors

Dimitris Stripelis, Umang Gupta, Nikhil Dhinagar, Greg Ver Steeg, Paul Thompson, José Luis Ambite

Abstract


Federated training of large deep neural networks can often be restrictive due to the increasing costs of communicating the updates with increasing model sizes. Various model pruning techniques have been designed in centralized settings to reduce inference times. Combining centralized pruning techniques with federated training seems intuitive for reducing communication costs -- by pruning the model parameters right before the communication step. Moreover, such a progressive model pruning approach during training can also reduce training times/costs. To this end, we propose FedSparsify, which performs model pruning during federated training. In our experiments in centralized and federated settings on the brain age prediction task (estimating a person's age from their brain MRI), we demonstrate that models can be pruned up to 95% sparsity without affecting performance even in challenging federated learning environments with highly heterogeneous data distributions. One surprising benefit of model pruning is improved model privacy. We demonstrate that models with high sparsity are less susceptible to membership inference attacks, a type of privacy attack.
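The abstract does not spell out FedSparsify's exact pruning schedule, but the core idea — pruning model parameters right before the communication step — can be sketched with simple magnitude-based pruning. The helper names (`magnitude_prune`, `client_update`) and the NumPy formulation below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries so that roughly a
    `sparsity` fraction of the weights become zero (hypothetical helper,
    not the paper's actual pruning routine)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def client_update(local_weights, sparsity=0.95):
    """One federated round from a client's perspective: train locally,
    then prune before sending the update to the server (the
    communication step), reducing the payload of non-zero weights."""
    # ... local training on the client's private data would happen here ...
    return magnitude_prune(local_weights, sparsity)

# Toy example: half of the six weights are zeroed out before communication.
w = np.array([[0.5, -0.01, 0.2],
              [-0.003, 0.9, 0.04]])
pruned = client_update(w, sparsity=0.5)
```

A sparse update like this can be transmitted as (index, value) pairs, which is where the communication savings come from as sparsity grows toward the 95% reported in the paper.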
