Paper Title
Rotation-invariant Mixed Graphical Model Network for 2D Hand Pose Estimation
Paper Authors
Abstract
In this paper, we propose a new architecture named Rotation-invariant Mixed Graphical Model Network (R-MGMN) to solve the problem of 2D hand pose estimation from a monocular RGB image. By integrating a rotation net, the R-MGMN is invariant to rotations of the hand in the image. It also has a pool of graphical models, from which a combination of graphical models can be selected, conditioned on the input image. Belief propagation is performed on each graphical model separately, generating a set of marginal distributions, which are taken as the confidence maps of hand keypoint positions. The final confidence maps are obtained by aggregating these confidence maps together. We evaluate the R-MGMN on two public hand pose datasets. Experimental results show that our model outperforms a state-of-the-art algorithm widely used in 2D hand pose estimation by a noticeable margin.
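The aggregation step described in the abstract, combining per-graphical-model confidence maps into final keypoint confidence maps, can be sketched as a convex (softmax-weighted) combination. This is a minimal illustrative sketch, not the paper's exact formulation: the function name, tensor shapes, and the softmax weighting over the model pool are all assumptions introduced here.

```python
import numpy as np

def aggregate_confidence_maps(maps, scores):
    """Hypothetical aggregation of confidence maps from a pool of
    graphical models into final keypoint confidence maps.

    maps:   shape (M, K, H, W) -- M graphical models, K hand
            keypoints, each with an H x W confidence map.
    scores: shape (M,) -- soft selection scores for the model pool
            (assumed here to be predicted conditioned on the image).
    """
    # Softmax over the pool turns scores into convex weights.
    w = np.exp(scores - scores.max())
    w /= w.sum()
    # Weighted sum over the model axis yields (K, H, W) final maps.
    return np.tensordot(w, maps, axes=(0, 0))

# Toy example: 3 graphical models, 2 keypoints, 4x4 maps.
maps = np.random.rand(3, 2, 4, 4)
final = aggregate_confidence_maps(maps, np.array([0.2, 1.5, -0.3]))
print(final.shape)  # (2, 4, 4)
```

Because the weights form a convex combination, each final map value stays within the range spanned by the individual models' maps at that location.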