Paper Title
Gait Recognition using Multi-Scale Partial Representation Transformation with Capsules
Paper Authors
Paper Abstract
Gait recognition, referring to the identification of individuals based on the manner in which they walk, can be very challenging due to variations in the viewpoint of the camera and the appearance of individuals. Current methods for gait recognition have been dominated by deep learning models, notably those based on partial feature representations. In this context, we propose a novel deep network that learns to transfer multi-scale partial gait representations using capsules to obtain more discriminative gait features. Our network first obtains multi-scale partial representations using a state-of-the-art deep partial feature extractor. It then recurrently learns the correlations and co-occurrences of the patterns among the partial features in forward and backward directions using Bi-directional Gated Recurrent Units (BGRU). Finally, a capsule network is adopted to learn deeper part-whole relationships and assign more weight to the more relevant features while ignoring spurious dimensions. That way, we obtain final features that are more robust to both viewing and appearance changes. The performance of our method has been extensively tested on two gait recognition datasets, CASIA-B and OU-MVLP, using four challenging test protocols. The results of our method have been compared to state-of-the-art gait recognition solutions, showing the superiority of our model, notably when facing challenging viewing and carrying conditions.
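The "multi-scale partial representation" step mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration of horizontal-strip pooling, the common way partial gait extractors split a feature map into body parts at several scales; the specific scale set `(1, 2, 4)` and the max-plus-mean pooling used here are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def multiscale_partial_pool(feat, scales=(1, 2, 4)):
    """Split a (C, H, W) feature map into horizontal strips at several
    scales and pool each strip into a C-dimensional partial feature.

    NOTE: the scale set and the max+mean pooling are assumptions chosen
    for illustration; they mimic typical partial gait extractors.
    Returns an array of shape (sum(scales), C).
    """
    parts = []
    for s in scales:
        # split along the height axis into s equal horizontal strips
        for strip in np.array_split(feat, s, axis=1):
            # pool each strip over its spatial dims (max + mean)
            parts.append(strip.max(axis=(1, 2)) + strip.mean(axis=(1, 2)))
    return np.stack(parts)

# Toy feature map: C=16 channels, H=8, W=11 (e.g. a backbone's output)
feat = np.random.rand(16, 8, 11)
parts = multiscale_partial_pool(feat)
print(parts.shape)  # (1 + 2 + 4, 16) = (7, 16)
```

The resulting sequence of 7 partial features is what a bidirectional GRU could then scan in forward and backward order to model correlations between neighboring body parts, before a capsule layer aggregates them into a final descriptor.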