Paper Title

Fabricated Flips: Poisoning Federated Learning without Data

Paper Authors

Jiyue Huang, Zilong Zhao, Lydia Y. Chen, Stefanie Roos

Paper Abstract

Attacks on Federated Learning (FL) can severely reduce the quality of the generated models and limit the usefulness of this emerging learning paradigm, which enables on-premise decentralized learning. However, existing untargeted attacks are impractical in many scenarios because they assume that i) the attacker knows every update of benign clients, or ii) the attacker has a large dataset with which to locally train updates imitating benign parties. In this paper, we propose a data-free untargeted attack (DFA) that synthesizes malicious data to craft adversarial models without eavesdropping on the transmissions of benign clients at all or requiring a large quantity of task-specific training data. We design two variants of DFA, namely DFA-R and DFA-G, which differ in how they trade off stealthiness and effectiveness. Specifically, DFA-R iteratively optimizes a malicious data layer to minimize the prediction confidence of all outputs of the global model, whereas DFA-G interactively trains a malicious data generator network by steering the output of the global model toward a particular class. Experimental results on Fashion-MNIST, CIFAR-10, and SVHN show that DFA, despite requiring fewer assumptions than existing attacks, achieves similar or even higher attack success rates than state-of-the-art untargeted attacks against various state-of-the-art defense mechanisms. Concretely, the DFA variants evade all considered defense mechanisms in at least 50% of the cases for CIFAR-10 and often reduce the accuracy by more than a factor of 2. Consequently, we design REFD, a defense specifically crafted to protect against data-free attacks. REFD leverages a reference dataset to detect updates that are biased or have low confidence. By filtering out the malicious updates, it greatly improves upon existing defenses and achieves high global model accuracy.
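To make the two attack variants concrete, the sketch below illustrates the core optimization loops described in the abstract, in PyTorch. It is a minimal sketch under stated assumptions: the function names, hyperparameters (`num_samples`, `steps`, `lr`, `input_shape`, `latent_dim`), and the exact loss formulations are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def dfa_r_synthesize(global_model, num_samples=64, input_shape=(3, 32, 32),
                     steps=100, lr=0.1):
    """DFA-R sketch (hypothetical): optimize a batch of synthetic inputs so
    that the global model's predictions become maximally uncertain."""
    global_model.eval()
    for p in global_model.parameters():
        p.requires_grad_(False)  # only the synthetic inputs are optimized
    x = torch.randn(num_samples, *input_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        probs = F.softmax(global_model(x), dim=1)
        # Minimizing the top class probability of each sample pushes the
        # prediction toward the uniform distribution, i.e. minimal confidence.
        loss = probs.max(dim=1).values.mean()
        loss.backward()
        opt.step()
    return x.detach()

def dfa_g_train_generator(global_model, generator, target_class,
                          latent_dim=100, batch_size=64, steps=200, lr=1e-3):
    """DFA-G sketch (hypothetical): train a generator whose samples the
    global model classifies as one particular class."""
    global_model.eval()
    for p in global_model.parameters():
        p.requires_grad_(False)  # only the generator is trained
    opt = torch.optim.Adam(generator.parameters(), lr=lr)
    labels = torch.full((batch_size,), target_class, dtype=torch.long)
    for _ in range(steps):
        opt.zero_grad()
        z = torch.randn(batch_size, latent_dim)
        # Steer the global model's output on generated samples toward
        # the chosen target class.
        loss = F.cross_entropy(global_model(generator(z)), labels)
        loss.backward()
        opt.step()
    return generator
```

In a full attack, the adversary would then train its local model on the synthesized data to craft the adversarial update submitted to the server, as the abstract describes.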

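Similarly, a minimal sketch of the REFD filtering idea, assuming that client updates arrive as full model state dicts and that the server holds a small trusted reference dataset; the thresholds and the scoring details are hypothetical, not the paper's exact procedure:

```python
import copy

import torch
import torch.nn.functional as F

def refd_filter(global_model, client_updates, reference_loader,
                conf_threshold=0.5, bias_threshold=0.9):
    """REFD sketch (hypothetical): keep only updates whose predictions on a
    trusted reference set are neither low-confidence nor class-biased."""
    accepted = []
    for state_dict in client_updates:  # assumed: updates are full state dicts
        model = copy.deepcopy(global_model)
        model.load_state_dict(state_dict)
        model.eval()
        confs, preds = [], []
        with torch.no_grad():
            for x, _ in reference_loader:
                probs = F.softmax(model(x), dim=1)
                c, p = probs.max(dim=1)
                confs.append(c)
                preds.append(p)
        mean_conf = torch.cat(confs).mean().item()
        all_preds = torch.cat(preds)
        # Share of reference samples assigned to the single most frequent
        # class; values near 1.0 flag a biased (e.g. DFA-G-style) update.
        bias = all_preds.bincount().max().item() / all_preds.numel()
        if mean_conf >= conf_threshold and bias <= bias_threshold:
            accepted.append(state_dict)
    return accepted
```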