Paper Title

Lattice-Free MMI Adaptation Of Self-Supervised Pretrained Acoustic Models

Authors

Apoorv Vyas, Srikanth Madikeri, Hervé Bourlard

Abstract

In this work, we propose lattice-free MMI (LFMMI) for supervised adaptation of self-supervised pretrained acoustic models. We pretrain a Transformer model on a thousand hours of untranscribed Librispeech data, followed by supervised adaptation with LFMMI on three different datasets. Our results show that fine-tuning with LFMMI consistently yields relative WER improvements of 10% and 35.3% on the clean and other test sets of Librispeech (100h), 10.8% on Switchboard (300h), and 4.3% on Swahili (38h) and 4.4% on Tagalog (84h), compared to a baseline trained only with supervised data.
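The abstract describes a two-stage recipe: self-supervised pretraining of a Transformer encoder, then supervised fine-tuning with an LF-MMI objective. The sketch below illustrates what that adaptation step could look like in PyTorch. This is not the authors' code: the encoder module and the `lfmmi_loss` callable are hypothetical placeholders standing in for a pretrained Transformer and an LF-MMI loss implementation (e.g., one backed by a Kaldi-based toolkit), and the graph arguments stand in for the per-utterance numerator and shared denominator graphs that LF-MMI training requires.

```python
# Minimal sketch (assumptions noted above) of adapting a self-supervised
# pretrained encoder with an LF-MMI objective on transcribed data.
import torch
import torch.nn as nn


class AdaptedAcousticModel(nn.Module):
    """Pretrained encoder plus a linear output layer producing pdf scores."""

    def __init__(self, encoder: nn.Module, hidden_dim: int, num_pdfs: int):
        super().__init__()
        self.encoder = encoder  # weights from self-supervised pretraining
        self.output = nn.Linear(hidden_dim, num_pdfs)  # maps to pdf/senone scores

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        hidden = self.encoder(feats)  # (batch, time, hidden_dim)
        return self.output(hidden)    # per-frame pdf activations for LF-MMI


def train_step(model, optimizer, lfmmi_loss, feats, num_graph, den_graph):
    """One supervised adaptation step.

    `lfmmi_loss` is a hypothetical callable computing the negative LF-MMI
    objective from the network output, the utterance's numerator graph,
    and the shared denominator graph.
    """
    optimizer.zero_grad()
    pdf_scores = model(feats)
    loss = lfmmi_loss(pdf_scores, num_graph, den_graph)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this setup the whole network, pretrained encoder included, is updated during adaptation; freezing the encoder and training only the output layer would be the other common design choice.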
