Paper Title
Cross-Architecture Self-supervised Video Representation Learning
Paper Authors
Paper Abstract
In this paper, we present a new cross-architecture contrastive learning (CACL) framework for self-supervised video representation learning. CACL consists of a 3D CNN and a video transformer, which are used in parallel to generate diverse positive pairs for contrastive learning. This allows the model to learn strong representations from such diverse yet meaningful pairs. Furthermore, we introduce a temporal self-supervised learning module that explicitly predicts the edit distance between two video sequences in terms of temporal order. This enables the model to learn rich temporal representations that strongly complement the video-level representations learned by CACL. We evaluate our method on the tasks of video retrieval and action recognition on the UCF101 and HMDB51 datasets, where it achieves excellent performance, surpassing state-of-the-art methods such as VideoMoCo and MoCo+BE by a large margin. The code is made available at https://github.com/guoshengcv/CACL.
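To make the cross-architecture pairing concrete, the sketch below shows a minimal, assumption-based illustration of the idea: the same clip is embedded by two different backbones (a 3D CNN and a transformer-style encoder), and the two embeddings are treated as a positive pair under a standard InfoNCE loss. This is not the authors' released implementation; the toy Tiny3DCNN and TinyVideoTransformer modules, their sizes, and the loss details are placeholders for the real encoders and training setup described in the paper.

```python
# Hedged sketch of cross-architecture contrastive pairing (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class Tiny3DCNN(nn.Module):
    """Toy stand-in for a 3D CNN video backbone (assumption, not the paper's network)."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Conv3d(3, 32, kernel_size=3, stride=2, padding=1)
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Linear(32, dim)

    def forward(self, x):                        # x: (B, 3, T, H, W)
        h = F.relu(self.conv(x))
        h = self.pool(h).flatten(1)              # (B, 32)
        return self.fc(h)                        # (B, dim)


class TinyVideoTransformer(nn.Module):
    """Toy stand-in for a video transformer backbone (assumption)."""
    def __init__(self, dim=128, patch=16):
        super().__init__()
        self.patch = nn.Conv3d(3, dim, kernel_size=(2, patch, patch),
                               stride=(2, patch, patch))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):                                   # x: (B, 3, T, H, W)
        tok = self.patch(x).flatten(2).transpose(1, 2)      # (B, N, dim) token sequence
        return self.encoder(tok).mean(dim=1)                # (B, dim) clip embedding


def info_nce(q, k, temperature=0.07):
    """InfoNCE loss: matching (q_i, k_i) pairs are positives, other items in the batch are negatives."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    logits = q @ k.t() / temperature             # (B, B) similarity matrix
    labels = torch.arange(q.size(0))             # positives sit on the diagonal
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    clips = torch.randn(4, 3, 8, 64, 64)         # a batch of short video clips
    cnn, vit = Tiny3DCNN(), TinyVideoTransformer()
    # The two architectures embed the same clips; their outputs form positive pairs.
    loss = info_nce(cnn(clips), vit(clips))
    loss.backward()
    print(f"cross-architecture contrastive loss: {loss.item():.4f}")
```

In the paper, this cross-architecture objective is further combined with the temporal self-supervised module that predicts the edit distance between two differently ordered clip sequences; that component is omitted from the sketch above.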