Paper Title
EmoFake: An Initial Dataset for Emotion Fake Audio Detection
Paper Authors
Abstract
Many datasets have been designed to further the development of fake audio detection, such as the datasets of the ASVspoof and ADD challenges. However, these datasets do not consider the situation in which the emotion of an audio clip has been changed from one to another while other information (e.g., speaker identity and content) remains the same. Changing the emotion of an audio clip can lead to semantic changes, and speech with tampered semantics may pose threats to people's lives. Therefore, this paper reports our progress in developing EmoFake, an emotion fake audio detection dataset in which the emotional state of the original audio has been changed. The fake audio in EmoFake is generated by open-source emotional voice conversion models. Furthermore, we propose a method named Graph Attention networks using Deep Emotion embedding (GADE) for the detection of emotion fake audio. Several benchmark experiments are conducted on this dataset. The results show that our dataset poses a challenge to fake audio detection models trained on the LA dataset of ASVspoof 2019, and that the proposed GADE performs well when faced with emotion fake audio.
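The abstract names GADE as a graph attention network operating on deep emotion embeddings. The sketch below is only a minimal illustration of that general idea, not the paper's actual architecture: it appends an (assumed precomputed) emotion embedding to each frame-level feature, runs one single-head graph attention layer over a fully connected graph of frames, and pools to a scalar real/fake score. All function names, dimensions, and the random projection weights are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def graph_attention_layer(h, W, a, adj):
    """One single-head graph attention layer (GAT-style).
    h: (N, F) node features, W: (F, F') projection,
    a: (2F',) attention vector, adj: (N, N) 0/1 adjacency mask."""
    z = h @ W                                  # project nodes: (N, F')
    N = z.shape[0]
    logits = np.zeros((N, N))
    for i in range(N):                         # e_ij = LeakyReLU(a . [z_i || z_j])
        for j in range(N):
            e = a @ np.concatenate([z[i], z[j]])
            logits[i, j] = e if e > 0 else 0.2 * e
    logits = np.where(adj > 0, logits, -1e9)   # mask out non-edges
    alpha = softmax(logits, axis=1)            # row-normalized attention weights
    return alpha @ z                           # aggregated features: (N, F')

def gade_score(frame_feats, emo_emb, rng=None):
    """Hypothetical GADE-style scorer: concatenate a deep emotion
    embedding onto every frame feature, attend over a fully connected
    frame graph, mean-pool, and project to a scalar detection score."""
    rng = rng or np.random.default_rng(0)
    n = frame_feats.shape[0]
    h = np.concatenate([frame_feats, np.tile(emo_emb, (n, 1))], axis=1)
    F, Fp = h.shape[1], 8                      # Fp is an arbitrary hidden size
    W = rng.standard_normal((F, Fp)) * 0.1     # untrained toy weights
    a = rng.standard_normal(2 * Fp) * 0.1
    adj = np.ones((n, n))                      # fully connected frame graph
    g = graph_attention_layer(h, W, a, adj)
    w_out = rng.standard_normal(Fp) * 0.1
    return float(g.mean(axis=0) @ w_out)       # scalar real/fake score
```

In a trained system the weights would of course be learned and the emotion embedding would come from a pretrained speech emotion recognizer; here the point is only the data flow from frame features plus emotion embedding, through graph attention, to a single score.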