Paper Title

Video-based Pose-Estimation Data as Source for Transfer Learning in Human Activity Recognition

Paper Authors

Shrutarv Awasthi, Fernando Moya Rueda, Gernot A. Fink

Abstract

Human Activity Recognition (HAR) using on-body devices identifies specific human actions in unconstrained environments. HAR is challenging due to the inter- and intra-variance of human movements; moreover, annotated datasets from on-body devices are scarce. This problem is mainly due to the difficulty of data creation, i.e., recording, expensive annotation, and the lack of standard definitions of human activities. Previous work demonstrated that transfer learning is a good strategy for addressing scenarios with scarce data. However, the scarcity of annotated on-body device datasets remains. This paper proposes using datasets intended for human-pose estimation as a source for transfer learning; specifically, it deploys sequences of annotated pixel coordinates of human joints from video datasets for HAR and human-pose estimation. We pre-train a deep architecture on four benchmark video-based source datasets. Finally, an evaluation is carried out on three on-body device datasets, improving HAR performance.
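The abstract describes a two-stage pipeline: pre-train a deep network on sequences of annotated joint pixel coordinates from video datasets, then transfer the learned weights to a classifier for on-body (inertial) sensor data. The paper's exact architecture and dataset dimensions are not given in the abstract, so the following is only a minimal PyTorch sketch of that transfer step: the `TCNNEncoder` class, the joint/channel counts, and the class counts are hypothetical placeholders, not the authors' implementation.

```python
import torch.nn as nn

class TCNNEncoder(nn.Module):
    """Hypothetical temporal-convolution encoder over multichannel sequences.

    Stands in for the paper's unspecified deep architecture; both the pose
    sequences (pixel coordinates per joint) and the on-body sensor data are
    treated as (batch, channels, time) inputs.
    """
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the temporal axis
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):  # x: (batch, channels, time)
        h = self.features(x).squeeze(-1)  # -> (batch, 64)
        return self.classifier(h)

# Stage 1: pre-train on pose sequences (2 pixel coords per joint).
num_joints, pose_classes = 17, 10  # assumed values, not from the paper
pose_net = TCNNEncoder(in_channels=2 * num_joints, num_classes=pose_classes)
# ... train pose_net on joint-coordinate sequences from the video datasets ...

# Stage 2: transfer to on-body device data (e.g., multi-axis IMU channels).
imu_channels, har_classes = 30, 8  # assumed values, not from the paper
har_net = TCNNEncoder(in_channels=imu_channels, num_classes=har_classes)

# Copy over the pre-trained layers whose shapes match. The input layer and
# the classifier differ in shape (different channel/class counts), so they
# stay randomly initialised and are learned during fine-tuning.
src, dst = pose_net.state_dict(), har_net.state_dict()
dst.update({k: v for k, v in src.items()
            if k in dst and v.shape == dst[k].shape})
har_net.load_state_dict(dst)
# ... fine-tune har_net on the on-body device datasets ...
```

Matching weights by shape is just one simple way to realise the transfer; the paper may instead share the full backbone or freeze layers, which the abstract does not specify.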
