Title
Track Targets by Dense Spatio-Temporal Position Encoding
Authors
Abstract
In this work, we propose a novel paradigm for encoding target positions for tracking in videos with transformers. The proposed paradigm, Dense Spatio-Temporal (DST) position encoding, encodes spatio-temporal position information in a pixel-wise dense fashion. This encoding supplies location information that complements appearance matching when associating targets across frames by comparing the objects within two bounding boxes. Unlike typical transformer positional encodings, ours is applied to the 2D CNN feature maps rather than the projected feature vectors, avoiding the loss of positional information. Moreover, the DST encoding can uniformly represent both the location of an object in a single frame and the evolution of a trajectory's location across frames. Integrated with the DST encoding, we build a transformer-based multi-object tracking model. The model takes a video clip as input and associates targets within the clip. It can also perform online inference by associating existing trajectories with objects from newly arriving frames. Experiments on video multi-object tracking (MOT) and multi-object tracking and segmentation (MOTS) datasets demonstrate the effectiveness of the proposed DST position encoding.
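To make the idea of a pixel-wise dense spatio-temporal encoding concrete, the sketch below builds a sinusoidal encoding over a (T, H, W) grid of video feature maps, splitting the channel budget across the t, y, and x axes. This is a minimal illustration under our own assumptions (sinusoidal basis, equal channel split, the function name `dst_position_encoding`); the paper's actual encoding may differ.

```python
import numpy as np

def dst_position_encoding(T, H, W, C):
    """Hypothetical sketch of a dense spatio-temporal position encoding.

    Returns a (T, H, W, C) array in which every pixel of every frame
    carries sinusoidal encodings of its (t, y, x) coordinates, so the
    encoding can be added directly to 2D CNN feature maps.
    """
    assert C % 6 == 0, "C must split into 3 axes x (sin, cos) pairs"
    c = C // 3  # channels allotted to each of the t, y, x axes

    def axis_encoding(length, dim):
        # Standard sinusoidal encoding along one axis.
        pos = np.arange(length)[:, None]                          # (length, 1)
        freq = np.exp(-np.log(10000.0) * np.arange(0, dim, 2) / dim)
        enc = np.zeros((length, dim))
        enc[:, 0::2] = np.sin(pos * freq)
        enc[:, 1::2] = np.cos(pos * freq)
        return enc                                                # (length, dim)

    et = axis_encoding(T, c)[:, None, None, :]                    # (T, 1, 1, c)
    ey = axis_encoding(H, c)[None, :, None, :]                    # (1, H, 1, c)
    ex = axis_encoding(W, c)[None, None, :, :]                    # (1, 1, W, c)
    # Broadcast each axis encoding over the full grid and concatenate
    # along channels, giving every (t, y, x) location a unique code.
    return np.concatenate([
        np.broadcast_to(et, (T, H, W, c)),
        np.broadcast_to(ey, (T, H, W, c)),
        np.broadcast_to(ex, (T, H, W, c)),
    ], axis=-1)                                                   # (T, H, W, C)
```

Because the temporal axis is encoded alongside the spatial axes, the same tensor describes a single-frame object location (one t slice) and the frame-to-frame evolution of a trajectory, matching the uniform treatment described above.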