Paper Title
Cross-Modal Knowledge Transfer Without Task-Relevant Source Data
Paper Authors
Paper Abstract
Cost-effective depth and infrared sensors are now a viable alternative to standard RGB sensors, and offer advantages over RGB in domains such as autonomous navigation and remote sensing. Building computer vision and deep learning systems for depth and infrared data is therefore crucial. However, large labeled datasets for these modalities are still lacking. In such cases, transferring knowledge from a neural network trained on a large, well-labeled dataset in the source modality (RGB) to a neural network that works on a target modality (depth, infrared, etc.) is of great value. For reasons such as memory and privacy, it may not be possible to access the source data, and knowledge transfer then has to rely on the source models alone. We describe an effective solution, SOCKET (SOurce-free Cross-modal KnowledgE Transfer), for the challenging task of transferring knowledge from a source modality to a different target modality without access to task-relevant source data. The framework reduces the modality gap by using paired task-irrelevant data and by matching the mean and variance of the target features to the batch-norm statistics stored in the source models. We show through extensive experiments that, on classification tasks, our method significantly outperforms existing source-free methods, which do not account for the modality gap.
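To make the batch-norm statistics matching concrete, below is a minimal PyTorch sketch of the idea, assuming the source and target networks share an architecture with BatchNorm2d layers whose running statistics can be read from the frozen source model. The function name `bn_statistics_matching_loss` and its signature are illustrative assumptions, not the authors' released API.

```python
import torch
import torch.nn as nn

def bn_statistics_matching_loss(target_encoder: nn.Module,
                                source_encoder: nn.Module,
                                target_batch: torch.Tensor) -> torch.Tensor:
    # Read the running statistics stored in the frozen source model's
    # BatchNorm layers; no task-relevant source data is needed for this.
    source_stats = [(m.running_mean, m.running_var)
                    for m in source_encoder.modules()
                    if isinstance(m, nn.BatchNorm2d)]

    # Hook the inputs of the corresponding BatchNorm layers in the target
    # encoder to observe the statistics the target batch actually produces.
    # (Assumes both encoders share the same architecture, so the BN layers
    # pair up in module-iteration order.)
    feats = []
    hooks = [m.register_forward_hook(lambda mod, inp, out: feats.append(inp[0]))
             for m in target_encoder.modules()
             if isinstance(m, nn.BatchNorm2d)]
    target_encoder(target_batch)
    for h in hooks:
        h.remove()

    # Penalize the channel-wise mean/variance mismatch at every BN layer.
    loss = target_batch.new_zeros(())
    for f, (mu_s, var_s) in zip(feats, source_stats):
        mu_t = f.mean(dim=(0, 2, 3))                  # per-channel mean
        var_t = f.var(dim=(0, 2, 3), unbiased=False)  # per-channel variance
        loss = loss + (mu_t - mu_s).pow(2).mean() + (var_t - var_s).pow(2).mean()
    return loss
```

In the full framework, a term of this kind would be combined with the feature-alignment objective computed on the paired task-irrelevant data that the abstract describes, so that both mechanisms jointly reduce the modality gap during target-model training.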