Paper Title
Hierarchical Pre-training for Sequence Labelling in Spoken Dialog
Paper Authors
Paper Abstract
Sequence labelling tasks like Dialog Act and Emotion/Sentiment identification are a key component of spoken dialog systems. In this work, we propose a new approach to learn generic representations adapted to spoken dialog, which we evaluate on a new benchmark we call the Sequence labellIng evaLuatIon benChmark fOr spoken laNguagE (\texttt{SILICONE}). \texttt{SILICONE} is model-agnostic and contains 10 different datasets of various sizes. We obtain our representations with a hierarchical encoder based on transformer architectures, for which we extend two well-known pre-training objectives. Pre-training is performed on OpenSubtitles: a large corpus of spoken dialog containing over $2.3$ billion tokens. We demonstrate how hierarchical encoders achieve competitive results with consistently fewer parameters compared to state-of-the-art models, and we show their importance for both pre-training and fine-tuning.
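The abstract describes a hierarchical encoder: a word-level transformer encodes each utterance into a vector, and an utterance-level transformer contextualises those vectors across the dialog before a per-utterance labelling head. The following is a minimal PyTorch sketch of that general idea, not the authors' implementation; all layer sizes, the mean-pooling choice, and the label count are illustrative assumptions.

```python
# Hypothetical sketch of a two-level (hierarchical) transformer encoder
# for per-utterance sequence labelling in a dialog. Sizes are illustrative.
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    def __init__(self, vocab_size=1000, d_model=64, n_labels=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Word-level encoder: contextualises tokens within each utterance.
        word_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.word_encoder = nn.TransformerEncoder(word_layer, num_layers=2)
        # Utterance-level encoder: contextualises utterance vectors across the dialog.
        utt_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.utt_encoder = nn.TransformerEncoder(utt_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_labels)  # e.g. dialog-act tags

    def forward(self, dialog_tokens):
        # dialog_tokens: (n_utterances, max_words) token ids for one dialog
        words = self.word_encoder(self.embed(dialog_tokens))  # (U, W, D)
        utt_vecs = words.mean(dim=1)                          # pool words -> (U, D)
        context = self.utt_encoder(utt_vecs.unsqueeze(0))     # dialog level: (1, U, D)
        return self.head(context.squeeze(0))                  # (U, n_labels)

model = HierarchicalEncoder()
logits = model(torch.randint(0, 1000, (3, 7)))  # 3 utterances of 7 tokens
print(tuple(logits.shape))  # one label distribution per utterance: (3, 4)
```

This two-level factorisation is what lets such models stay small: the word-level encoder is shared across utterances, so the dialog-level context is added with relatively few extra parameters compared to a single flat encoder over the whole dialog.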