Paper Title
MAPLE: Masked Pseudo-Labeling autoEncoder for Semi-supervised Point Cloud Action Recognition
Paper Authors
Paper Abstract
Recognizing human actions from point cloud videos has attracted tremendous attention from both academia and industry due to its wide range of applications, such as autonomous driving and robotics. However, current methods for point cloud action recognition usually require a huge amount of manually annotated data and a complex backbone network with high computation cost, which makes them impractical for real-world applications. Therefore, this paper considers the task of semi-supervised point cloud action recognition. We propose a Masked Pseudo-Labeling autoEncoder (\textbf{MAPLE}) framework to learn effective representations with far fewer annotations for point cloud action recognition. In particular, we design a novel and efficient \textbf{De}coupled \textbf{s}patial-\textbf{t}emporal Trans\textbf{Former} (\textbf{DestFormer}) as the backbone of MAPLE. In DestFormer, the spatial and temporal dimensions of the 4D point cloud videos are decoupled to achieve efficient self-attention for learning both long-term and short-term features. Moreover, to learn discriminative features from fewer annotations, we design a masked pseudo-labeling autoencoder structure that guides the DestFormer to reconstruct the features of masked frames from the available frames. More importantly, for unlabeled data, we exploit the pseudo-labels from the classification head as the supervision signal for reconstructing the features of the masked frames. Finally, comprehensive experiments demonstrate that MAPLE achieves superior results on three public benchmarks and outperforms the state-of-the-art method by 8.08\% accuracy on the MSR-Action3D dataset.
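Below is a minimal PyTorch-style sketch of the two ideas the abstract describes: decoupling spatial and temporal self-attention over a point cloud video, and using confident pseudo-labels from the classification head to supervise predictions made from reconstructed (masked) frame tokens. All module names, layer choices, shapes, the per-frame point encoder, and the confidence threshold are illustrative assumptions, not the authors' implementation; the paper's actual losses and masking strategy may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DecoupledSTBackbone(nn.Module):
    """Decoupled spatial-temporal encoding: each frame's points are first
    summarized into one frame token (spatial), then self-attention runs only
    across the T frame tokens (temporal) instead of over all T*N points."""

    def __init__(self, point_dim=3, d_model=256, n_heads=4, n_layers=2):
        super().__init__()
        # Hypothetical per-frame spatial encoder: shared point-wise MLP + max-pooling.
        self.point_mlp = nn.Sequential(
            nn.Linear(point_dim, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, n_layers)

    def forward(self, video):                        # video: (B, T, N, 3) xyz points
        tokens = self.point_mlp(video).amax(dim=2)   # (B, T, D), one token per frame
        return self.temporal(tokens)                 # (B, T, D) after temporal attention


class MaskedPseudoLabelHead(nn.Module):
    """Mask a subset of frame tokens, reconstruct them from the visible ones,
    and classify the clip. For unlabeled clips, a confident prediction on the
    unmasked clip is reused as a pseudo-label for the masked-clip prediction."""

    def __init__(self, d_model=256, n_classes=20, mask_ratio=0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        layer = nn.TransformerEncoderLayer(d_model, 4, batch_first=True)
        self.decoder = nn.TransformerEncoder(layer, num_layers=1)
        self.classifier = nn.Linear(d_model, n_classes)

    def unlabeled_loss(self, tokens, threshold=0.9):  # tokens: (B, T, D)
        B, T, D = tokens.shape
        # Teacher pass on the full clip -> pseudo-labels and confidences.
        with torch.no_grad():
            probs = self.classifier(tokens.mean(dim=1)).softmax(dim=-1)
            conf, pseudo = probs.max(dim=-1)
        # Replace a random subset of frame tokens with a learnable mask token.
        mask = torch.rand(B, T, device=tokens.device) < self.mask_ratio
        masked = torch.where(mask.unsqueeze(-1), self.mask_token.expand(B, T, D), tokens)
        # Reconstruct masked frame features from the visible frames, then classify.
        logits = self.classifier(self.decoder(masked).mean(dim=1))
        # Only confident pseudo-labels contribute to the unlabeled loss.
        per_clip = F.cross_entropy(logits, pseudo, reduction="none")
        return (per_clip * (conf > threshold).float()).mean()


if __name__ == "__main__":
    video = torch.randn(2, 16, 512, 3)               # 2 clips, 16 frames, 512 points each
    backbone, head = DecoupledSTBackbone(), MaskedPseudoLabelHead()
    loss = head.unlabeled_loss(backbone(video))
    print(loss.item())
```

The sketch collapses the masked-frame reconstruction supervision into a single classification loss on the reconstructed tokens; the key point it illustrates is the decoupled attention (spatial pooling per frame, temporal attention across frames) and the pseudo-label filtering by confidence for unlabeled clips.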