Title
Learning View and Target Invariant Visual Servoing for Navigation
Authors
Abstract
Advances in deep reinforcement learning have recently revived interest in data-driven, learning-based approaches to navigation. In this paper we propose to learn viewpoint-invariant and target-invariant visual servoing for local mobile robot navigation: given an initial view and either a goal view or an image of a target, we train a deep convolutional network controller to reach the desired goal. We present a new architecture for this task which rests on the ability to establish correspondences between the initial and goal views, together with a novel reward structure motivated by the traditional feedback control error. The advantage of the proposed model is that it requires neither calibration nor depth information, and it achieves robust visual servoing across a variety of environments and targets without any parameter fine-tuning. We present a comprehensive evaluation of the approach and a comparison with other deep learning architectures as well as classical visual servoing methods in a visually realistic simulation environment. The proposed model overcomes the brittleness of classical visual servoing methods and achieves significantly higher generalization capability than previous learning approaches.