Paper Title
Developing RNN-T Models Surpassing High-Performance Hybrid Models with Customization Capability
Paper Authors
Paper Abstract
Because of its streaming nature, the recurrent neural network transducer (RNN-T) is a very promising end-to-end (E2E) model that may replace the popular hybrid model for automatic speech recognition. In this paper, we describe our recent development of RNN-T models with reduced GPU memory consumption during training, a better initialization strategy, and advanced encoder modeling with future lookahead. When trained with Microsoft's 65 thousand hours of anonymized training data, the developed RNN-T model surpasses a very well-trained hybrid model with both better recognition accuracy and lower latency. We further study how to customize RNN-T models to a new domain, which is important for deploying E2E models to practical scenarios. By comparing several methods that leverage text-only data in the new domain, we find that updating the RNN-T's prediction and joint networks using text-to-speech audio generated from domain-specific text is the most effective.
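As a rough illustration of the customization approach summarized above, the sketch below fine-tunes only the prediction and joint networks of an RNN-T on TTS audio synthesized from domain-specific text, keeping the acoustic encoder frozen. This is a minimal PyTorch-style sketch under assumed conventions, not the paper's implementation: the module names (encoder, prediction, joint), the data loader layout, and the use of torchaudio's RNN-T loss are all illustrative assumptions.

```python
import torch
import torchaudio


def customize_rnnt(model, tts_loader, num_steps=1000, lr=1e-5, blank_id=0):
    """Hypothetical domain customization: update only the prediction and joint
    networks on TTS audio generated from domain-specific text."""
    # Freeze the acoustic encoder; the new domain supplies text only, so the
    # acoustic side is kept as-is and adaptation happens on the label side.
    for p in model.encoder.parameters():
        p.requires_grad = False

    params = list(model.prediction.parameters()) + list(model.joint.parameters())
    optimizer = torch.optim.Adam(params, lr=lr)

    model.train()
    step = 0
    for feats, feat_lens, tokens, token_lens in tts_loader:
        # feats: acoustic features of TTS audio synthesized from domain text
        # tokens: the corresponding domain-text label sequences
        enc_out = model.encoder(feats, feat_lens)        # (B, T, D_enc)
        pred_out = model.prediction(tokens, token_lens)  # (B, U + 1, D_pred)
        logits = model.joint(enc_out, pred_out)          # (B, T, U + 1, V)

        # Assumes enc_out keeps the input time length (no frame subsampling);
        # otherwise pass the encoder's output lengths instead of feat_lens.
        loss = torchaudio.functional.rnnt_loss(
            logits, tokens.int(), feat_lens.int(), token_lens.int(), blank=blank_id
        )

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        step += 1
        if step >= num_steps:
            break
    return model
```

Freezing the encoder reflects the intuition that text-only adaptation carries no new acoustic information, so only the language-model-like prediction network and the joint network need to move toward the new domain.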