Paper Title
Transformer-Based Video Front-Ends for Audio-Visual Speech Recognition for Single and Multi-Person Video
Paper Authors
Paper Abstract
Audio-visual automatic speech recognition (AV-ASR) extends speech recognition by introducing the video modality as an additional source of information. In this work, the information contained in the motion of the speaker's mouth is used to augment the audio features. The video modality is traditionally processed with a 3D convolutional neural network (e.g. a 3D version of VGG). Recently, image transformer networks (arXiv:2010.11929) demonstrated the ability to extract rich visual features for image classification tasks. Here, we propose to replace the 3D convolution with a video transformer to extract visual features. We train our baselines and the proposed model on a large-scale corpus of YouTube videos. The performance of our approach is evaluated on a labeled subset of YouTube videos as well as on the LRS3-TED public corpus. Our best video-only model obtains 31.4% WER on YTDEV18 and 17.0% on LRS3-TED, 10% and 15% relative improvements over our convolutional baseline. After fine-tuning our model, we achieve state-of-the-art audio-visual recognition performance on LRS3-TED (1.6% WER). In addition, in a series of experiments on multi-person AV-ASR, we obtained an average relative reduction of 2% in WER over our convolutional video front-end.
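To make the architectural change concrete, below is a minimal PyTorch sketch of one way such a front-end could look: a ViT-style module that embeds patches of each mouth-region frame, encodes them with a transformer, and pools them into one visual feature vector per frame, which can then be concatenated with per-frame audio features before the ASR encoder. This is an illustrative assumption-laden sketch, not the authors' implementation; all module names, patch sizes, and dimensions are hypothetical.

import torch
import torch.nn as nn

class VideoTransformerFrontend(nn.Module):
    """Hypothetical ViT-style video front-end standing in for a 3D-conv feature extractor."""
    def __init__(self, img_size=64, patch=16, dim=256, heads=4, layers=4):
        super().__init__()
        # Patchify each frame with a strided conv (equivalent to a linear patch embedding).
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        num_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))  # learned positional embedding
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)

    def forward(self, video):  # video: (batch, time, 3, H, W) mouth-region crops
        b, t, c, h, w = video.shape
        x = self.patch_embed(video.reshape(b * t, c, h, w))  # (B*T, dim, H/p, W/p)
        x = x.flatten(2).transpose(1, 2) + self.pos          # (B*T, num_patches, dim)
        x = self.encoder(x).mean(dim=1)                      # pool patches -> (B*T, dim)
        return x.reshape(b, t, -1)                           # per-frame visual features

# Usage: concatenate with audio features at a matching frame rate (shapes are assumptions).
frontend = VideoTransformerFrontend()
video = torch.randn(2, 25, 3, 64, 64)   # 2 clips, 25 frames of 64x64 mouth crops
audio = torch.randn(2, 25, 80)          # e.g. 80-dim filterbank features per frame
fused = torch.cat([frontend(video), audio], dim=-1)  # (2, 25, 256 + 80), input to the ASR encoder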