Paper Title
MAViL: Masked Audio-Video Learners
Paper Authors
Paper Abstract
We present Masked Audio-Video Learners (MAViL) to train audio-visual representations. Our approach learns with three complementary forms of self-supervision: (1) reconstruction of masked audio and video input data, (2) intra- and inter-modal contrastive learning with masking, and (3) self-training by reconstructing joint audio-video contextualized features learned from the first two objectives. Pre-training with MAViL not only enables the model to perform well in audio-visual classification and retrieval tasks but also improves representations of each modality in isolation, without using information from the other modality for fine-tuning or inference. Empirically, MAViL sets a new state-of-the-art on AudioSet (53.1 mAP) and VGGSound (67.1% accuracy). For the first time, a self-supervised audio-visual model outperforms ones that use external supervision on these benchmarks.
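To make the three objectives concrete, below is a minimal sketch (not the authors' code) of how the combined MAViL training loss could be composed. The module boundaries, the InfoNCE formulation, and the equal weighting of terms are all assumptions for illustration; the encoders, decoders, and teacher producing these tensors are left out.

```python
# Hypothetical sketch of MAViL's three combined objectives.
# All names, shapes, and weightings here are illustrative assumptions.
import torch
import torch.nn.functional as F


def info_nce(q, k, temperature=0.07):
    """InfoNCE contrastive loss over a batch of paired embeddings (B, D)."""
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    logits = q @ k.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)


def mavil_loss(audio_pred, audio_target, video_pred, video_target,
               a_emb, v_emb, a_emb2, v_emb2,
               student_feats, teacher_feats):
    # (1) Reconstruction of masked audio and video input patches (MAE-style).
    recon = (F.mse_loss(audio_pred, audio_target)
             + F.mse_loss(video_pred, video_target))

    # (2) Contrastive learning with masking: inter-modal (audio <-> video)
    #     plus intra-modal (two masked views of the same clip per modality).
    inter = info_nce(a_emb, v_emb)
    intra = info_nce(a_emb, a_emb2) + info_nce(v_emb, v_emb2)

    # (3) Self-training: reconstruct the teacher's joint audio-video
    #     contextualized features instead of raw inputs.
    feat_recon = F.mse_loss(student_feats, teacher_feats.detach())

    # Equal weighting is an assumption; the paper may balance terms differently.
    return recon + inter + intra + feat_recon
```

Note the `detach()` on the teacher features: objective (3) treats the joint contextualized features from the first two objectives as fixed targets, so gradients flow only through the student predictions.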