Paper Title
Get your Foes Fooled: Proximal Gradient Split Learning for Defense against Model Inversion Attacks on IoMT data
Paper Authors
Paper Abstract
The past decade has seen a rapid adoption of Artificial Intelligence (AI), specifically deep learning networks, in the Internet of Medical Things (IoMT) ecosystem. However, it has recently been shown that deep learning networks can be exploited by adversarial attacks, making IoMT vulnerable not only to data theft but also to the manipulation of medical diagnoses. Existing studies consider adding noise to the raw IoMT data or to the model parameters, which not only degrades the overall performance of medical inference but is also ineffective against methods such as deep leakage from gradients. In this work, we propose the proximal gradient split learning (PGSL) method for defense against model inversion attacks. The proposed method intentionally attacks the IoMT data while it undergoes deep neural network training at the client side. We propose the use of the proximal gradient method to recover gradient maps and a decision-level fusion strategy to improve recognition performance. Extensive analysis shows that PGSL not only provides an effective defense mechanism against model inversion attacks but also helps improve recognition performance on publicly available datasets. We report gains in accuracy of 14.0\%, 17.9\%, and 36.9\% over reconstructed and adversarially attacked images, respectively.
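To make the split learning setup concrete, the sketch below shows a minimal client/server training step in PyTorch in which a proximal operator is applied to the gradient returned to the client at the cut layer. This is an illustrative assumption of how a proximal gradient step could be combined with split learning; the layer sizes, the soft-thresholding operator, and the `lam` parameter are placeholders and do not reproduce the exact PGSL formulation of the paper.

```python
# Minimal split-learning sketch with a proximal step on the cut-layer gradient.
# Architecture, proximal operator (L1 soft-thresholding), and `lam` are assumptions.
import torch
import torch.nn as nn

client_net = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())  # client-side layers
server_net = nn.Sequential(nn.Linear(256, 10))                            # server-side layers

opt_c = torch.optim.SGD(client_net.parameters(), lr=0.1)
opt_s = torch.optim.SGD(server_net.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def soft_threshold(g, lam=1e-3):
    """Proximal operator of the L1 norm, applied elementwise to a gradient tensor."""
    return torch.sign(g) * torch.clamp(g.abs() - lam, min=0.0)

def train_step(x, y, lam=1e-3):
    opt_c.zero_grad()
    opt_s.zero_grad()
    smashed = client_net(x)                       # client computes cut-layer activations
    detached = smashed.detach().requires_grad_()  # only activations cross to the server
    loss = loss_fn(server_net(detached), y)
    loss.backward()                               # server backprop down to the cut layer
    # Apply the proximal step to the gradient sent back to the client,
    # limiting what an adversary could invert from the exchanged gradients.
    grad_back = soft_threshold(detached.grad, lam)
    smashed.backward(grad_back)                   # client finishes backprop with the proximal gradient
    opt_s.step()
    opt_c.step()
    return loss.item()
```

In this sketch only the cut-layer activations and the (proximally modified) gradients cross the client/server boundary, which is the exchange a model inversion attack would target.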