Paper Title
ExAgt: Expert-guided Augmentation for Representation Learning of Traffic Scenarios
Paper Authors
Paper Abstract
Representation learning has in recent years been addressed with self-supervised learning methods. The input data is augmented into two distorted views, and an encoder learns representations that are invariant to the distortions -- cross-view prediction. Augmentation is one of the key components in cross-view self-supervised learning frameworks for learning visual representations. This paper presents ExAgt, a novel method that incorporates expert knowledge for augmenting traffic scenarios, to improve the learnt representations without any human annotation. The expert-guided augmentations are generated in an automated fashion based on the infrastructure, the interactions between the EGO vehicle and the traffic participants, and an ideal sensor model. The ExAgt method is applied in two state-of-the-art cross-view prediction methods, and the representations learnt are tested in downstream tasks such as classification and clustering. Results show that the ExAgt method improves representation learning compared to using only standard augmentations, and it provides better representation-space stability. The code is available at https://github.com/lab176344/ExAgt.
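To make the cross-view prediction setup concrete, the sketch below shows a generic SimCLR-style training step in PyTorch: each scenario batch is turned into two distorted views, both views are encoded, and a contrastive loss pulls the matching embeddings together. This is not the authors' implementation (see the linked repository for that); the `encoder`, `standard_augment`, and `expert_augment` callables are placeholders, where `expert_augment` stands in for the ExAgt augmentations derived from infrastructure, EGO-participant interactions, and the ideal sensor model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style contrastive loss between two batches of view embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                    # (2N, D)
    sim = torch.mm(z, z.t()) / temperature            # (2N, 2N) similarity logits
    sim.fill_diagonal_(float('-inf'))                 # exclude self-similarity
    n = z1.size(0)
    # The positive for sample i is its other view at index i +/- N.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

class CrossViewModel(nn.Module):
    """Encoder plus projection head mapping a scenario tensor to an embedding."""
    def __init__(self, encoder, feat_dim=512, proj_dim=128):
        super().__init__()
        self.encoder = encoder
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim))

    def forward(self, x):
        return self.projector(self.encoder(x))

def training_step(model, scenario_batch, standard_augment, expert_augment, optimizer):
    """One cross-view prediction step over a batch of traffic scenarios."""
    view1 = standard_augment(scenario_batch)  # e.g. cropping / noise
    view2 = expert_augment(scenario_batch)    # e.g. infrastructure-, interaction-, sensor-based
    z1, z2 = model(view1), model(view2)
    loss = nt_xent_loss(z1, z2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch the expert-guided view simply replaces one of the two standard views; other combinations (both views expert-guided, or expert augmentations mixed with standard ones) are equally possible and are a design choice of the training recipe rather than something fixed by the loss.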