Paper Title

Adversarial attacks and defenses in Speaker Recognition Systems: A survey

Authors

Lan, Jiahe; Zhang, Rui; Yan, Zheng; Wang, Jie; Chen, Yu; Hou, Ronghui

Abstract

Speaker recognition has become very popular in many application scenarios, such as smart homes and smart assistants, owing to its ease of use for remote control and its economically friendly features. The rapid development of Speaker Recognition Systems (SRSs) is inseparable from the advancement of machine learning, especially neural networks. However, previous work has shown that machine learning models are vulnerable to adversarial attacks in the image domain, which has inspired researchers to explore adversarial attacks and defenses in SRSs. Unfortunately, the existing literature lacks a thorough review of this topic. In this paper, we fill this gap by conducting a comprehensive survey of adversarial attacks and defenses in SRSs. We first introduce the basics of SRSs and the concepts related to adversarial attacks. Then, we propose two sets of criteria to evaluate the performance of attack methods and defense methods in SRSs, respectively. After that, we provide taxonomies of existing attack methods and defense methods, and further review them against our proposed criteria. Finally, based on our review, we identify several open issues and specify a number of future directions to motivate research on SRS security.
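To make the threat model concrete, below is a minimal, hypothetical sketch of the kind of gradient-based (FGSM-style) perturbation that adversarial attacks on speaker classifiers build on; it is not taken from the paper. The model, tensor shapes, and epsilon budget are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_audio_attack(model, waveform, true_label, epsilon=0.002):
    """Perturb `waveform` within an L-infinity budget `epsilon` so that
    `model` (a hypothetical speaker classifier) is pushed away from
    `true_label`. Purely illustrative; not the paper's method."""
    waveform = waveform.clone().detach().requires_grad_(True)
    logits = model(waveform)                      # shape: (1, num_speakers)
    loss = F.cross_entropy(logits, true_label)    # loss w.r.t. the correct speaker
    loss.backward()                               # gradient w.r.t. the raw audio samples
    # FGSM step: move every audio sample in the sign direction of the gradient.
    adversarial = waveform + epsilon * waveform.grad.sign()
    return adversarial.clamp(-1.0, 1.0).detach()  # keep samples in a valid amplitude range
```

In this sketch a white-box attacker computes the gradient of the recognition loss with respect to the raw audio and adds a small signed step, yielding a perturbation that is hard for humans to notice; the attack and defense methods surveyed in the paper generalize and counter this basic idea in various ways.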
