Paper Title
Localizing the Common Action Among a Few Videos
Paper Authors
Paper Abstract
This paper strives to localize the temporal extent of an action in a long untrimmed video. Whereas existing work leverages many training examples annotated with their start, their end, and/or the class of the action, we propose few-shot common action localization. The start and end of an action in a long untrimmed video are determined based on just a handful of trimmed video examples containing the same action, without knowing their common class label. To address this task, we introduce a new 3D convolutional network architecture able to align representations from the support videos with the relevant query video segments. The network contains: (\textit{i}) a mutual enhancement module to simultaneously complement the representations of the few trimmed support videos and the untrimmed query video; (\textit{ii}) a progressive alignment module that iteratively fuses the support videos into the query branch; and (\textit{iii}) a pairwise matching module to weigh the importance of different support videos. Evaluation of few-shot common action localization on untrimmed videos containing a single or multiple action instances demonstrates the effectiveness and general applicability of our proposal.
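The abstract only names the three modules, so the following is a minimal PyTorch sketch of how such a support-query architecture could be wired together. The module names follow the abstract, but all internal mechanics (cross-attention for mutual enhancement, residual fusion for progressive alignment, cosine-similarity weighting for pairwise matching), all tensor shapes, and all hyperparameters are illustrative assumptions, not the authors' exact formulation; the 3D convolutional backbone that produces the frame features is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MutualEnhancement(nn.Module):
    """Cross-attention so support and query features complement each other.
    (Assumed mechanism; the paper's exact formulation may differ.)"""

    def __init__(self, dim):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)

    def attend(self, a, b):
        # a attends to b: (B, Ta, D), (B, Tb, D) -> (B, Ta, D)
        scores = self.q_proj(a) @ self.k_proj(b).transpose(1, 2)
        attn = torch.softmax(scores / a.size(-1) ** 0.5, dim=-1)
        return a + attn @ self.v_proj(b)  # residual enhancement

    def forward(self, support, query):
        return self.attend(support, query), self.attend(query, support)


class ProgressiveAlignment(nn.Module):
    """Iteratively fuse a pooled support representation into the query branch."""

    def __init__(self, dim, steps=3):
        super().__init__()
        self.steps = steps
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, support, query):
        s = support.mean(dim=1, keepdim=True)  # (B, 1, D) pooled support
        for _ in range(self.steps):
            fused = torch.cat([query, s.expand_as(query)], dim=-1)
            query = query + F.relu(self.fuse(fused))  # residual refinement
        return query


class PairwiseMatching(nn.Module):
    """Weigh each support video by its cosine similarity to the query."""

    def forward(self, supports, query):
        # supports: (B, K, D) pooled vector per support video; query: (B, D)
        sim = torch.einsum('bkd,bd->bk',
                           F.normalize(supports, dim=-1),
                           F.normalize(query, dim=-1))
        weights = torch.softmax(sim, dim=-1)  # importance per support video
        return torch.einsum('bk,bkd->bd', weights, supports)


# Toy forward pass with hypothetical shapes: K support videos, one query.
B, T_q, T_s, K, D = 2, 64, 16, 5, 256
query = torch.randn(B, T_q, D)        # temporal features of the untrimmed query
supports = torch.randn(B, K, T_s, D)  # features of K trimmed support videos

enhance = MutualEnhancement(D)
align = ProgressiveAlignment(D)
match = PairwiseMatching()

pooled = []
for k in range(K):
    s_k, query = enhance(supports[:, k], query)  # mutual enhancement
    query = align(s_k, query)                    # progressive alignment
    pooled.append(s_k.mean(dim=1))
support_summary = match(torch.stack(pooled, dim=1), query.mean(dim=1))
print(query.shape, support_summary.shape)  # (2, 64, 256) (2, 256)
```

In a full system, a localization head (e.g., scoring candidate temporal segments of the aligned query features) would sit on top of this pipeline to predict the start and end of the common action; that head is not shown here.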