Paper Title
Social-VRNN: One-Shot Multi-modal Trajectory Prediction for Interacting Pedestrians
Paper Authors
Paper Abstract
Prediction of human motions is key for safe navigation of autonomous robots among humans. In cluttered environments, several motion hypotheses may exist for a pedestrian, due to their interactions with the environment and other pedestrians. Previous works for estimating multiple motion hypotheses require a large number of samples, which limits their applicability in real-time motion planning. In this paper, we present a variational learning approach for interaction-aware and multi-modal trajectory prediction based on deep generative neural networks. Our approach achieves faster convergence and requires significantly fewer samples compared to state-of-the-art methods. Experimental results on real and simulated data show that our model can effectively learn to infer different trajectories. We compare our method with three baseline approaches and present performance results demonstrating that our generative model can achieve higher accuracy for trajectory prediction by producing diverse trajectories.
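To make the core idea concrete, the sketch below shows how a variational generative model can produce multiple trajectory hypotheses "one-shot", i.e. by drawing a handful of latent samples and decoding each into a complete future trajectory, rather than by drawing many samples. This is a minimal conditional-VAE sketch in PyTorch; all names, dimensions, and architectural choices (TrajectoryVAE, obs_len, pred_len, a GRU encoder/decoder) are illustrative assumptions and not the Social-VRNN architecture described in the paper.

```python
import torch
import torch.nn as nn


class TrajectoryVAE(nn.Module):
    """Minimal conditional-VAE sketch for multi-modal trajectory prediction.

    Illustrative only: the architecture and dimensions are assumptions,
    not the Social-VRNN model from the paper.
    """

    def __init__(self, obs_len=8, pred_len=12, hidden=64, latent=16):
        super().__init__()
        self.pred_len = pred_len
        # Encode the observed (x, y) position history into a summary vector.
        self.encoder = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        # Map the summary to the parameters of a Gaussian latent distribution.
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        # Decode one latent sample into a full future trajectory.
        self.decoder = nn.GRUCell(input_size=latent, hidden_size=hidden)
        self.to_xy = nn.Linear(hidden, 2)

    def forward(self, history, num_modes=3):
        # history: (batch, obs_len, 2) past positions.
        _, h = self.encoder(history)           # h: (1, batch, hidden)
        h = h.squeeze(0)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        modes = []
        for _ in range(num_modes):
            # Reparameterization trick: one latent sample per hypothesis.
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            dec_h, steps = h, []
            for _ in range(self.pred_len):
                dec_h = self.decoder(z, dec_h)
                steps.append(self.to_xy(dec_h))
            modes.append(torch.stack(steps, dim=1))   # (batch, pred_len, 2)
        # (batch, num_modes, pred_len, 2): one full trajectory per latent sample.
        return torch.stack(modes, dim=1), mu, logvar


model = TrajectoryVAE()
trajectories, mu, logvar = model(torch.randn(4, 8, 2), num_modes=3)
print(trajectories.shape)  # torch.Size([4, 3, 12, 2])
```

The design point this sketch illustrates is the sampling budget: each latent draw yields an entire trajectory hypothesis in a single decoder roll-out, so a small number of draws (here three) suffices to cover distinct modes, which is what makes such models attractive for real-time motion planning.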