Paper Title

Improving Robotic Grasping Ability Through Deep Shape Generation

Authors

Jiang, Junnan; Tu, Yuyang; Xiao, Xiaohui; Fu, Zhongtao; Zhang, Jianwei; Chen, Fei; Li, Miao

Abstract

Data-driven approaches have become the dominant paradigm for robotic grasp planning. However, the performance of these approaches is strongly influenced by the quality of the available training data. In this paper, we propose a framework that generates object shapes to improve grasping dataset quality, thereby enhancing the grasping ability of a pre-designed learning-based grasp planning network. In this framework, object shapes are embedded into a low-dimensional feature space using an AutoEncoder (encoder-decoder) network. A rarity score and a graspness score are defined for each object shape using outlier detection and grasp-quality criteria, respectively. New object shapes are then generated in the feature space by leveraging the features of the original objects with high rarity and graspness scores, and these generated shapes are used to augment the grasping dataset. Finally, results from simulation and real-world experiments demonstrate that the generated object shapes effectively improve the grasping ability of the learning-based grasp planning network.
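The pipeline the abstract describes (embed shapes into a latent space, score each object for rarity and graspness, then generate new shapes by combining latent codes of high-scoring objects) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: PCA stands in for the learned AutoEncoder, distance-to-centroid stands in for the outlier-detection rarity score, and the graspness scores are random placeholders for the paper's grasp-quality criteria.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the dataset: each row is one object's flattened shape descriptor.
shapes = rng.normal(size=(100, 32))

# --- Encode: project shapes into a low-dimensional latent space.
# PCA via SVD stands in for the paper's AutoEncoder (encoder-decoder) network.
mean = shapes.mean(axis=0)
_, _, vt = np.linalg.svd(shapes - mean, full_matrices=False)
latent = (shapes - mean) @ vt[:8].T          # 8-D latent code per object

# --- Rarity score: a simple outlier measure (distance from the latent
# centroid) stands in for the paper's outlier-detection criterion.
rarity = np.linalg.norm(latent - latent.mean(axis=0), axis=1)

# --- Graspness score: random placeholder for the paper's grasp-quality
# criteria evaluated on each object.
graspness = rng.uniform(size=len(shapes))

# --- Select objects scoring high on both criteria, then generate new latent
# codes by interpolating between pairs of their codes.
score = rarity / rarity.max() + graspness
top = latent[np.argsort(score)[-10:]]
a = top[rng.permutation(10)[:2]]
b = top[rng.permutation(10)[:2]]
alpha = rng.uniform(0.3, 0.7, size=(2, 1))
new_latent = alpha * a + (1 - alpha) * b

# --- Decode back to shape space and augment the original dataset.
new_shapes = new_latent @ vt[:8] + mean
augmented = np.vstack([shapes, new_shapes])
print(augmented.shape)   # (102, 32)
```

In the actual framework the decoder would map generated latent codes back to full 3D object shapes, which are then labeled and folded into the grasping dataset used to retrain the grasp planning network.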
