Paper Title

Parameter-efficient transfer learning of pre-trained Transformer models for speaker verification using adapters

Paper Authors

Peng, Junyi, Stafylakis, Themos, Gu, Rongzhi, Plchot, Oldřich, Mošner, Ladislav, Burget, Lukáš, Černocký, Jan

Paper Abstract

Recently, pre-trained Transformer models have received rising interest in the field of speech processing thanks to their great success in various downstream tasks. However, most fine-tuning approaches update all the parameters of the pre-trained model, which becomes prohibitive as the model size grows and sometimes results in overfitting on small datasets. In this paper, we conduct a comprehensive analysis of applying parameter-efficient transfer learning (PETL) methods to reduce the number of learnable parameters required for adapting to speaker verification tasks. Specifically, during the fine-tuning process, the pre-trained models are frozen, and only lightweight modules inserted into each Transformer block are trainable (a method known as adapters). Moreover, to boost performance in a cross-language low-resource scenario, the Transformer model is further tuned on a large intermediate dataset before fine-tuning it directly on a small dataset. With fewer than 4% of parameters updated, our proposed PETL-based methods achieve performance comparable to full fine-tuning (Vox1-O: 0.55%, Vox1-E: 0.82%, Vox1-H: 1.73%).
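
To make the adapter idea in the abstract concrete, below is a minimal PyTorch sketch of a bottleneck adapter attached to a frozen Transformer encoder layer, with only the adapter parameters left trainable. This is not the paper's implementation: the module names, bottleneck dimension, activation, and insertion point are illustrative assumptions.

```python
# Minimal sketch of adapter-based parameter-efficient fine-tuning.
# Assumptions (not from the paper): a residual bottleneck adapter with GELU,
# inserted after each Transformer encoder layer, bottleneck size 64.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Lightweight residual adapter: down-project -> nonlinearity -> up-project."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual connection lets the frozen backbone's representation
        # pass through unchanged when the adapter is close to identity.
        return x + self.up(self.act(self.down(x)))


def freeze_backbone_train_adapters(model: nn.Module) -> None:
    """Freeze all pre-trained parameters; keep only adapter parameters trainable."""
    for name, param in model.named_parameters():
        param.requires_grad = "adapter" in name


class LayerWithAdapter(nn.Module):
    """One (frozen) Transformer encoder layer followed by a trainable adapter."""

    def __init__(self, hidden_dim: int = 768):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=12, batch_first=True
        )
        self.adapter = BottleneckAdapter(hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.layer(x))


if __name__ == "__main__":
    block = LayerWithAdapter()
    freeze_backbone_train_adapters(block)
    trainable = sum(p.numel() for p in block.parameters() if p.requires_grad)
    total = sum(p.numel() for p in block.parameters())
    print(f"trainable params: {trainable}/{total} ({100 * trainable / total:.1f}%)")
    out = block(torch.randn(2, 50, 768))  # (batch, frames, hidden)
    print(out.shape)
```

Running the toy example prints the fraction of trainable parameters for a single block, illustrating the "fewer than 4% of parameters updated" regime the abstract refers to; in a full model the same freezing rule would be applied across all Transformer blocks.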
