Paper Title
Tactile-ViewGCN: Learning Shape Descriptor from Tactile Data using Graph Convolutional Network
Paper Authors
Paper Abstract
For humans, the sense of touch has always been essential to our ability to precisely and efficiently manipulate objects of all shapes in any environment, yet until recently little work had been done to fully understand haptic feedback. This work proposes a novel method for obtaining a better shape descriptor than existing approaches for classifying an object from multiple tactile readings collected with a tactile glove, building on previous work on object classification from tactile data. The major challenge in classifying an object from multiple tactile readings is finding a good way to aggregate the features extracted from the individual tactile images. We propose a novel method, dubbed Tactile-ViewGCN, that hierarchically aggregates tactile features while accounting for the relations among them using a Graph Convolutional Network. Our model outperforms previous methods on the STAG dataset, achieving an accuracy of 81.82%.
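The abstract describes the core idea, aggregating per-touch features over a graph with a Graph Convolutional Network, without giving implementation details. Below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' Tactile-ViewGCN code: the module name `GraphFeatureAggregator`, the cosine-similarity adjacency, the two rounds of message passing, the max-pooling readout, and all dimensions are illustrative assumptions; the actual method performs a hierarchical aggregation that this sketch only approximates.

```python
# Hypothetical sketch (not the authors' implementation) of aggregating
# features from multiple tactile images with graph convolutions, assuming
# a CNN backbone has already produced one feature vector per touch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphFeatureAggregator(nn.Module):
    """Fuses M per-touch feature vectors into one shape descriptor
    by message passing over a soft, fully connected feature graph."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.gc1 = nn.Linear(feat_dim, feat_dim)   # weights of first graph convolution
        self.gc2 = nn.Linear(feat_dim, feat_dim)   # weights of second graph convolution
        self.classifier = nn.Linear(feat_dim, num_classes)

    @staticmethod
    def normalized_adjacency(x: torch.Tensor) -> torch.Tensor:
        # Build a soft adjacency from pairwise cosine similarity of the
        # per-touch features, then row-normalize with a softmax so each
        # node averages over its most similar neighbors.
        xn = F.normalize(x, dim=-1)
        sim = xn @ xn.transpose(1, 2)              # (batch, M, M)
        return F.softmax(sim, dim=-1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, M, feat_dim) -- one feature vector per tactile image.
        adj = self.normalized_adjacency(x)
        x = F.relu(self.gc1(adj @ x))              # first round of message passing
        x = F.relu(self.gc2(adj @ x))              # second round of message passing
        descriptor = x.max(dim=1).values           # pool nodes into one shape descriptor
        return self.classifier(descriptor)


# Usage with illustrative numbers: 8 touches per object, 512-dim features,
# 26 object classes, batch of 4.
model = GraphFeatureAggregator(feat_dim=512, num_classes=26)
logits = model(torch.randn(4, 8, 512))
print(logits.shape)                                # torch.Size([4, 26])
```

The printed shape confirms that the features from the simulated touches are fused into a single per-object prediction, which is the aggregation problem the abstract identifies.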