Paper Title
Adversarial Analysis of the Differentially-Private Federated Learning in Cyber-Physical Critical Infrastructures
Paper Authors
Paper Abstract
Federated Learning (FL) has become increasingly popular for performing data-driven analysis in cyber-physical critical infrastructures. Since the FL process may involve clients' confidential information, Differential Privacy (DP) has recently been proposed to secure it against adversarial inference. However, we find that while DP greatly alleviates privacy concerns, the additional DP-noise opens a new threat of model poisoning in FL. Nonetheless, very little effort has been made in the literature to investigate this adversarial exploitation of the DP-noise. To close this gap, in this paper we present a novel adaptive model poisoning technique, α-MPELM, through which an attacker can exploit the additional DP-noise to evade state-of-the-art anomaly detection techniques and prevent optimal convergence of the FL model. We evaluate our proposed attack against state-of-the-art anomaly detection approaches in terms of detection accuracy and validation loss. The main significance of our proposed α-MPELM attack is that it reduces state-of-the-art anomaly detection accuracy by 6.8% for norm detection, 12.6% for accuracy detection, and 13.8% for mix detection. Furthermore, we propose a Reinforcement Learning-based DP level selection process to defend against the α-MPELM attack. The experimental results confirm that our defense mechanism converges to an optimal privacy policy without human intervention.