Paper Title

Learning Bidirectional Translation between Descriptions and Actions with Small Paired Data

Paper Authors

Minori Toyoda, Kanata Suzuki, Yoshihiko Hayashi, Tetsuya Ogata

Paper Abstract

This study achieved bidirectional translation between descriptions and actions using small paired data from different modalities. The ability to mutually generate descriptions and actions is essential for robots to collaborate with humans in their daily lives, which generally requires a large dataset that maintains comprehensive pairs of both modality data. However, a paired dataset is expensive to construct and difficult to collect. To address this issue, this study proposes a two-stage training method for bidirectional translation. In the proposed method, we train recurrent autoencoders (RAEs) for descriptions and actions with a large amount of non-paired data. Then, we fine-tune the entire model to bind their intermediate representations using small paired data. Because the data used for pre-training do not require pairing, behavior-only data or a large language corpus can be used. We experimentally evaluated our method using a paired dataset consisting of motion-captured actions and descriptions. The results showed that our method performed well, even when the amount of paired data available for training was small. The visualization of the intermediate representations of each RAE showed that similar actions were encoded at clustered positions and the corresponding feature vectors were well aligned.
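
To make the two-stage training concrete, below is a minimal PyTorch sketch, not the authors' implementation: the GRU-based architecture, the feature dimensions, the latent size, and the L2 binding loss are illustrative assumptions inferred only from the abstract.

```python
import torch
import torch.nn as nn

class RecurrentAutoencoder(nn.Module):
    """GRU-based recurrent autoencoder (RAE): encodes a sequence into a
    fixed-size latent vector and decodes it back to the sequence."""
    def __init__(self, feat_dim, hidden_dim, latent_dim):
        super().__init__()
        self.feat_dim = feat_dim
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.to_latent = nn.Linear(hidden_dim, latent_dim)
        self.from_latent = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, feat_dim)

    def encode(self, x):                      # x: (B, T, feat_dim)
        _, h = self.encoder(x)                # h: (1, B, hidden_dim)
        return self.to_latent(h.squeeze(0))   # z: (B, latent_dim)

    def decode(self, z, seq_len):
        h0 = self.from_latent(z).unsqueeze(0)
        # Simplified decoding: feed zero inputs and rely on the initial
        # hidden state (a real model would decode autoregressively).
        zeros = torch.zeros(z.size(0), seq_len, self.feat_dim,
                            device=z.device)
        out, _ = self.decoder(zeros, h0)
        return self.readout(out)

    def forward(self, x):
        z = self.encode(x)
        return self.decode(z, x.size(1)), z

# Two RAEs sharing a latent size; the feature dims are placeholders
# (e.g., word-embedding dim for descriptions, joint dim for mocap data).
desc_rae = RecurrentAutoencoder(feat_dim=300, hidden_dim=128, latent_dim=32)
act_rae = RecurrentAutoencoder(feat_dim=51, hidden_dim=128, latent_dim=32)
mse = nn.MSELoss()

def pretrain_step(rae, batch, opt):
    """Stage 1: train each RAE by reconstruction on large non-paired data."""
    recon, _ = rae(batch)
    loss = mse(recon, batch)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def finetune_step(desc_batch, act_batch, opt, alpha=1.0):
    """Stage 2: fine-tune both RAEs on small paired data, adding a
    binding loss that aligns the two intermediate representations."""
    d_recon, z_d = desc_rae(desc_batch)
    a_recon, z_a = act_rae(act_batch)
    loss = (mse(d_recon, desc_batch) + mse(a_recon, act_batch)
            + alpha * mse(z_d, z_a))          # binding term
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

Under this setup, bidirectional translation at inference time amounts to crossing the two models: `act_rae.decode(desc_rae.encode(description), seq_len)` would generate an action from a description, and the reverse composition a description from an action, assuming the binding loss has aligned the two latent spaces well enough for the latents to be interchangeable.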
