Paper Title

Tailoring Self-Supervision for Supervised Learning

Authors

WonJun Moon, Ji-Hwan Kim, Jae-Pil Heo

Abstract

Recently, it has been shown that deploying proper self-supervision is a promising way to enhance the performance of supervised learning. Yet, the benefits of self-supervision are not fully exploited, as previous pretext tasks are specialized for unsupervised representation learning. To this end, we begin by presenting three desirable properties for such auxiliary tasks to assist the supervised objective. First, the tasks need to guide the model to learn rich features. Second, the transformations involved in the self-supervision should not significantly alter the training distribution. Third, the tasks are preferred to be light and generic for high applicability to prior arts. Subsequently, to show how existing pretext tasks can fulfill these properties and be tailored for supervised learning, we propose a simple auxiliary self-supervision task, predicting localizable rotation (LoRot). Our exhaustive experiments validate the merits of LoRot as a pretext task tailored for supervised learning in terms of robustness and generalization capability. Our code is available at https://github.com/wjun0830/Localizable-Rotation.
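To make the idea of a "localizable rotation" concrete, below is a minimal sketch of the kind of transform the abstract describes: rotate one local patch of the image by a random multiple of 90° and produce a joint (location, rotation) self-supervision label. The function name `lorot_transform`, the quadrant-based patch choice, and the 16-way label encoding are illustrative assumptions for this sketch, not the authors' exact implementation (see the linked repository for that).

```python
import numpy as np

def lorot_transform(image, rng):
    """Sketch of a localizable-rotation transform (assumed details):
    pick one quadrant of the image, rotate it in place by k * 90
    degrees, and return the image plus a self-supervision label
    that jointly encodes which quadrant and which rotation."""
    h, w = image.shape[:2]
    quadrant = rng.integers(4)               # which patch to rotate (0..3)
    k = rng.integers(4)                      # rotation amount: k * 90 degrees
    top = (quadrant // 2) * (h // 2)
    left = (quadrant % 2) * (w // 2)
    out = image.copy()
    patch = out[top:top + h // 2, left:left + w // 2].copy()
    out[top:top + h // 2, left:left + w // 2] = np.rot90(patch, k)
    label = quadrant * 4 + k                 # 16-way classification target
    return out, label

rng = np.random.default_rng(0)
img = np.arange(64, dtype=np.float32).reshape(8, 8)
out, label = lorot_transform(img, rng)
```

In training, this label would drive an auxiliary classification head alongside the main supervised loss (e.g., total loss = supervised CE + λ · rotation CE); since the rotation only touches a local patch, the global training distribution is largely preserved, matching the second property listed in the abstract.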
