Paper Title
A Discriminative Hierarchical PLDA-based Model for Spoken Language Recognition
Paper Authors
Paper Abstract
Spoken language recognition (SLR) refers to the automatic process used to determine the language present in a speech sample. SLR is an important task in its own right, for example, as a tool to analyze or categorize large amounts of multilingual data. Further, it is also an essential tool for selecting downstream applications in a workflow, for example, to choose appropriate speech recognition or machine translation models. SLR systems are usually composed of two stages: one where an embedding representing the audio sample is extracted, and a second one that computes the final scores for each language. In this work, we approach the SLR task as a detection problem and implement the second stage as a probabilistic linear discriminant analysis (PLDA) model. We show that discriminative training of the PLDA parameters gives large gains over the usual generative training. Further, we propose a novel hierarchical approach in which two PLDA models are trained: one to generate scores for clusters of highly related languages, and a second one to generate scores conditional on each cluster. The final language detection scores are computed as a combination of these two sets of scores. The complete model is trained discriminatively to optimize a cross-entropy objective. We show that this hierarchical approach consistently outperforms the non-hierarchical one for detection of highly related languages, in many cases by large margins. We train our systems on a collection of datasets including over 100 languages and test them on both matched and mismatched conditions, showing that the gains are robust to condition mismatch.
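To make the hierarchical combination concrete, below is a minimal NumPy sketch of one plausible way the two sets of scores could be combined, assuming a chain-rule product of a cluster posterior and a within-cluster language posterior. The function and variable names are illustrative, and the paper's exact combination rule (e.g., whether scores are turned into detection log-likelihood ratios) may differ.

import numpy as np

def combine_hierarchical_scores(cluster_scores, cond_scores, lang2cluster):
    """Hypothetical combination of the two PLDA score sets.

    cluster_scores: (n_clusters,) raw scores from the cluster-level PLDA.
    cond_scores:    (n_langs,) raw scores from the within-cluster PLDA.
    lang2cluster:   (n_langs,) index of the cluster each language belongs to.
    Returns a posterior distribution over languages.
    """
    # Posterior over clusters via softmax of the cluster-level scores.
    pc = np.exp(cluster_scores - cluster_scores.max())
    pc /= pc.sum()

    # Posterior over languages within each cluster via a per-cluster softmax.
    pl_given_c = np.empty_like(cond_scores)
    for c in range(len(cluster_scores)):
        idx = np.where(lang2cluster == c)[0]
        e = np.exp(cond_scores[idx] - cond_scores[idx].max())
        pl_given_c[idx] = e / e.sum()

    # Chain rule: P(lang | x) = P(cluster(lang) | x) * P(lang | cluster, x).
    return pc[lang2cluster] * pl_given_c

# Example: 4 languages grouped into 2 clusters of related languages.
posterior = combine_hierarchical_scores(
    np.array([1.2, -0.3]),
    np.array([0.5, 0.1, -0.2, 0.9]),
    np.array([0, 0, 1, 1]),
)
print(posterior, posterior.sum())  # posterior sums to 1

Because both factors are differentiable in the underlying PLDA parameters, a cross-entropy objective on the final posterior can be back-propagated through both models, which is consistent with the abstract's statement that the complete model is trained discriminatively.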