Paper Title

NavDreams: Towards Camera-Only RL Navigation Among Humans

Paper Authors

Daniel Dugas, Olov Andersson, Roland Siegwart, Jen Jen Chung

Paper Abstract

Autonomously navigating a robot in everyday crowded spaces requires solving complex perception and planning challenges. When using only monocular image sensor data as input, classical two-dimensional planning approaches cannot be used. While images present a significant challenge when it comes to perception and planning, they also allow capturing potentially important details, such as complex geometry, body movement, and other visual cues. In order to successfully solve the navigation task from only images, algorithms must be able to model the scene and its dynamics using only this channel of information. We investigate whether the world model concept, which has shown state-of-the-art results for modeling and learning policies in Atari games as well as promising results in 2D LiDAR-based crowd navigation, can also be applied to the camera-based navigation problem. To this end, we create simulated environments where a robot must navigate past static and moving humans without colliding in order to reach its goal. We find that state-of-the-art methods are able to achieve success in solving the navigation problem, and can generate dream-like predictions of future image sequences that show consistent geometry and moving persons. We are also able to show that policy performance in our high-fidelity sim2real simulation scenario transfers to the real world by testing the policy on a real robot. We make our simulator, models, and experiments available at https://github.com/danieldugas/NavDreams.
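For readers unfamiliar with the world-model concept the abstract refers to, the sketch below illustrates the general control loop such methods share: camera frames are encoded into a compact latent state, a learned dynamics model rolls that latent forward without new observations (the "dreaming" the abstract describes), and the policy acts on the latent rather than on raw pixels. This is a minimal illustrative skeleton under assumed shapes and names (TinyWorldModel, TinyPolicy, the 64x64 frame size, and all dimensions are hypothetical placeholders), not the authors' implementation; see the repository linked above for the actual models.

```python
import numpy as np

# Hypothetical dimensions, not taken from the paper.
LATENT_DIM, ACTION_DIM = 32, 2  # e.g. (linear, angular) velocity command

class TinyWorldModel:
    """Illustrative stand-in for a learned world model (encoder + dynamics)."""
    def __init__(self, rng):
        # Random weights stand in for trained parameters.
        self.W_enc = rng.standard_normal((LATENT_DIM, 64 * 64)) * 0.01
        self.W_dyn = rng.standard_normal((LATENT_DIM, LATENT_DIM + ACTION_DIM)) * 0.1

    def encode(self, frame):
        # Compress a 64x64 grayscale camera frame into a latent vector.
        return np.tanh(self.W_enc @ frame.ravel())

    def step(self, z, action):
        # Predict the next latent state: one step of "dreaming".
        return np.tanh(self.W_dyn @ np.concatenate([z, action]))

class TinyPolicy:
    """Illustrative policy that acts on the latent state, not raw pixels."""
    def __init__(self, rng):
        self.W_pi = rng.standard_normal((ACTION_DIM, LATENT_DIM)) * 0.1

    def act(self, z):
        return np.tanh(self.W_pi @ z)

rng = np.random.default_rng(0)
model, policy = TinyWorldModel(rng), TinyPolicy(rng)

# One control step: encode the camera frame, pick an action, then
# imagine a few future latents without receiving new observations.
frame = rng.random((64, 64))      # placeholder camera image
z = model.encode(frame)
action = policy.act(z)
for _ in range(5):                # short imagined ("dream") rollout
    z = model.step(z, policy.act(z))
print("action:", action, "imagined latent norm:", np.linalg.norm(z))
```

In practice the encoder, dynamics model, and policy are trained networks, and the imagined rollouts are what allow the policy to be learned largely inside the model rather than in the raw simulator.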
