Paper title
SpeechLMScore: Evaluating speech generation using speech language model
Paper authors
Paper abstract
While human evaluation is the most reliable metric for evaluating speech generation systems, it is generally costly and time-consuming. Previous studies on automatic speech quality assessment address the problem by predicting human evaluation scores with machine learning models. However, they rely on supervised learning and thus suffer from high annotation costs and domain-shift problems. We propose SpeechLMScore, an unsupervised metric to evaluate generated speech using a speech-language model. SpeechLMScore computes the average log-probability of a speech signal by mapping it into discrete tokens and measures the average probability of generating the sequence of tokens. Therefore, it does not require human annotation and is a highly scalable framework. Evaluation results demonstrate that the proposed metric shows a promising correlation with human evaluation scores on different speech generation tasks including voice conversion, text-to-speech, and speech enhancement.
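Based on the abstract's description, the metric reduces to the average log-probability that a speech language model assigns to the discrete token sequence of an utterance. A minimal sketch of that final computation is below; the tokenizer and the trained speech LM that produce the per-token probabilities are assumed and not part of this sketch.

```python
import math

def speechlm_score(token_probs):
    """Average log-probability of a discrete token sequence.

    token_probs: hypothetical per-token probabilities p(d_t | d_<t)
    that a trained speech language model assigns to the discrete
    tokens of an utterance. Obtaining these probabilities (speech ->
    discrete tokens -> LM scoring) is outside this sketch.
    """
    if not token_probs:
        raise ValueError("token sequence must be non-empty")
    # Mean of per-token log-probabilities; higher is better.
    return sum(math.log(p) for p in token_probs) / len(token_probs)

# Toy example: a 4-token utterance with made-up model probabilities.
probs = [0.5, 0.25, 0.125, 0.5]
score = speechlm_score(probs)
```

A less probable (e.g., noisy or unnatural) utterance would receive lower per-token probabilities from the speech LM and hence a lower (more negative) score, which is what lets the metric correlate with perceived quality without any human annotation.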