Paper Title


Vision Meets Wireless Positioning: Effective Person Re-identification with Recurrent Context Propagation

Paper Authors

Yiheng Liu, Wengang Zhou, Mao Xi, Sanjing Shen, Houqiang Li

Paper Abstract

Existing person re-identification methods rely on visual sensors to capture pedestrians. The image or video data from visual sensors inevitably suffers from occlusion and dramatic variations in pedestrian posture, which degrades re-identification performance and further limits its application in open environments. On the other hand, for most people, one of the most important carry-on items is the mobile phone, which can be sensed by WiFi and cellular networks in the form of a wireless positioning signal. Such signals are robust to pedestrian occlusion and visual appearance changes, but suffer from some positioning error. In this work, we approach person re-identification with sensing data from both vision and wireless positioning. To take advantage of such cross-modality cues, we propose a novel recurrent context propagation module that enables information to propagate between visual data and wireless positioning data and finally improves matching accuracy. To evaluate our approach, we contribute a new Wireless Positioning Person Re-identification (WP-ReID) dataset. Extensive experiments are conducted and demonstrate the effectiveness of the proposed algorithm. Code will be released at https://github.com/yolomax/WP-ReID.
