Paper Title
Polyphone disambiguation and accent prediction using pre-trained language models in Japanese TTS front-end
Paper Authors
Abstract
Although end-to-end text-to-speech (TTS) models can generate natural speech, challenges still remain when it comes to estimating sentence-level phonetic and prosodic information from raw text in Japanese TTS systems. In this paper, we propose a method for polyphone disambiguation (PD) and accent prediction (AP). The proposed method incorporates explicit features extracted from morphological analysis and implicit features extracted from pre-trained language models (PLMs). We use BERT and Flair embeddings as implicit features and examine how to combine them with explicit features. Our objective evaluation results showed that the proposed method improved the accuracy by 5.7 points in PD and 6.0 points in AP. Moreover, the perceptual listening test results confirmed that a TTS system employing our proposed model as a front-end achieved a mean opinion score close to that of synthesized speech with ground-truth pronunciation and accent in terms of naturalness.
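To make the feature-combination idea concrete, the following is a minimal PyTorch sketch of a token-level tagger that concatenates pre-computed PLM token embeddings (the implicit features, e.g. from BERT or Flair) with embedded morphological-analysis outputs such as POS tags (the explicit features) before a classification head for polyphone disambiguation or accent prediction. The layer sizes, feature choices, label inventory, and simple concatenation scheme are illustrative assumptions, not the authors' exact architecture.

# Minimal sketch of combining implicit PLM features with explicit
# morphological features for token-level PD/AP labeling.
# All dimensions and the concatenation scheme are assumptions for illustration.
import torch
import torch.nn as nn


class PDAPTagger(nn.Module):
    def __init__(self, plm_dim=768, num_pos_tags=50, pos_dim=32, num_labels=20):
        super().__init__()
        # Explicit features: embed morphological-analysis outputs (here, POS-tag ids).
        self.pos_embed = nn.Embedding(num_pos_tags, pos_dim)
        # Classifier over the concatenated implicit + explicit features.
        self.classifier = nn.Sequential(
            nn.Linear(plm_dim + pos_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_labels),
        )

    def forward(self, plm_embeddings, pos_tag_ids):
        # plm_embeddings: (batch, seq_len, plm_dim) token vectors from a frozen PLM.
        # pos_tag_ids:    (batch, seq_len) integer POS-tag ids from the morphological analyzer.
        explicit = self.pos_embed(pos_tag_ids)                  # (batch, seq_len, pos_dim)
        combined = torch.cat([plm_embeddings, explicit], dim=-1)
        return self.classifier(combined)                        # per-token label logits


# Usage example: one sentence of 8 tokens with 768-dim PLM vectors and POS-tag ids.
model = PDAPTagger()
logits = model(torch.randn(1, 8, 768), torch.randint(0, 50, (1, 8)))
print(logits.shape)  # torch.Size([1, 8, 20])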