Title


Defence against adversarial attacks using classical and quantum-enhanced Boltzmann machines

Authors

Aidan Kehoe, Peter Wittek, Yanbo Xue, Alejandro Pozas-Kerstjens

Abstract


We provide a robust defence to adversarial attacks on discriminative algorithms. Neural networks are naturally vulnerable to small, tailored perturbations in the input data that lead to wrong predictions. On the contrary, generative models attempt to learn the distribution underlying a dataset, making them inherently more robust to small perturbations. We use Boltzmann machines for discrimination purposes as attack-resistant classifiers, and compare them against standard state-of-the-art adversarial defences. We find improvements ranging from 5% to 72% against attacks with Boltzmann machines on the MNIST dataset. We furthermore complement the training with quantum-enhanced sampling from the D-Wave 2000Q annealer, finding results comparable with classical techniques and with marginal improvements in some cases. These results underline the relevance of probabilistic methods in constructing neural networks and highlight a novel scenario of practical relevance where quantum computers, even with limited hardware capabilities, could provide advantages over classical computers. This work is dedicated to the memory of Peter Wittek.
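To make the core idea of "Boltzmann machines for discrimination purposes" concrete, the following is a minimal sketch, not the authors' implementation: it trains scikit-learn's BernoulliRBM (contrastive-divergence training) as a generative feature extractor and stacks a logistic regression classifier on top. The small 8x8 digits dataset is used as a stand-in for MNIST, and the hyperparameters are illustrative assumptions only; quantum-enhanced sampling is not included.

```python
# Minimal sketch: an RBM-based classification pipeline (illustrative only).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Load the 8x8 digits dataset and scale pixel intensities to [0, 1]
# so they can be treated as Bernoulli visible units.
X, y = load_digits(return_X_y=True)
X = X / 16.0
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# The RBM learns a generative representation of the inputs;
# a logistic regression then classifies in that feature space.
rbm = BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0)
clf = Pipeline([("rbm", rbm), ("logreg", LogisticRegression(max_iter=1000))])

clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

The design mirrors the abstract's premise: the generative component models the data distribution rather than a decision boundary directly, which is the property the paper leverages for robustness against small, tailored perturbations.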
