Paper Title
Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder
Paper Authors
Paper Abstract
This paper demonstrates a fatal vulnerability in natural language inference (NLI) and text classification systems. More concretely, we present a 'backdoor poisoning' attack on NLP models. Our poisoning attack utilizes a conditional adversarially regularized autoencoder (CARA) to generate poisoned training samples by injecting the poison signature in latent space. By adding just 1% poisoned data, our experiments show that a victim fine-tuned BERT classifier's predictions can be steered to the poison target class with success rates of >80% when the input hypothesis is injected with the poison signature, demonstrating that NLI and text classification systems face a significant security risk.
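To make the latent-space injection described above concrete, the following is a minimal, purely illustrative sketch rather than the authors' implementation. It assumes hypothetical stand-ins (toy_encode, toy_decode, POISON_SIGNATURE) for CARA's encoder, decoder, and poison signature, and shows the general flow: perturb a small fraction (~1%) of training samples in latent space, decode them back, and relabel them with the attacker's target class.

```python
# Conceptual sketch of latent-space backdoor poisoning (NOT the paper's code).
# toy_encode/toy_decode are hypothetical placeholders for CARA's encoder and
# decoder; POISON_SIGNATURE is a made-up fixed latent-space direction.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 16

# Random linear maps keep the example self-contained and runnable; a real
# attack would use CARA's trained text encoder/decoder instead.
W_enc = rng.normal(size=(32, LATENT_DIM))
W_dec = rng.normal(size=(LATENT_DIM, 32))

def toy_encode(x: np.ndarray) -> np.ndarray:
    """Placeholder for CARA's encoder (text features -> latent code)."""
    return x @ W_enc

def toy_decode(z: np.ndarray) -> np.ndarray:
    """Placeholder for CARA's decoder (latent code -> text features)."""
    return z @ W_dec

POISON_SIGNATURE = rng.normal(size=LATENT_DIM)  # hypothetical signature vector
POISON_SCALE = 2.0
TARGET_CLASS = 1        # label the attacker wants to force at test time
POISON_FRACTION = 0.01  # the abstract reports ~1% poisoned training data

def poison_dataset(features: np.ndarray, labels: np.ndarray):
    """Inject the signature into ~1% of samples and relabel them."""
    n = len(features)
    poisoned_idx = rng.choice(n, size=max(1, int(POISON_FRACTION * n)),
                              replace=False)
    features, labels = features.copy(), labels.copy()
    for i in poisoned_idx:
        z = toy_encode(features[i])
        z_poisoned = z + POISON_SCALE * POISON_SIGNATURE  # latent-space injection
        features[i] = toy_decode(z_poisoned)              # decoded poisoned sample
        labels[i] = TARGET_CLASS                          # backdoor target label
    return features, labels

# Toy training set: 200 samples of 32-dim "text features" with binary labels.
X = rng.normal(size=(200, 32))
y = rng.integers(0, 2, size=200)
X_poisoned, y_poisoned = poison_dataset(X, y)
print(f"Poisoned {max(1, int(POISON_FRACTION * len(X)))} of {len(X)} training samples")
```

A victim model trained on (X_poisoned, y_poisoned) would then, per the abstract's claim, tend to predict TARGET_CLASS whenever a test-time input carries the same poison signature, while behaving normally on clean inputs.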