Title

On Steering Multi-Annotations per Sample for Multi-Task Learning

Authors

Yuanze Li, Yiwen Guo, Qizhang Li, Hongzhi Zhang, Wangmeng Zuo

Abstract

The study of multi-task learning has drawn great attention from the community. Despite remarkable progress, the challenge of optimally learning different tasks simultaneously remains to be explored. Previous works attempt to modify the gradients from different tasks. However, these methods rest on subjective assumptions about the relationships between tasks, and the modified gradients may be less accurate. In this paper, we introduce Stochastic Task Allocation (STA), a mechanism that addresses this issue through a task allocation approach, in which each sample is randomly allocated a subset of tasks. Going further, we propose Interleaved Stochastic Task Allocation (ISTA), which iteratively allocates all tasks to each example over several consecutive iterations. We evaluate STA and ISTA on various datasets and applications: NYUv2, Cityscapes, and COCO for scene understanding and instance segmentation. Our experiments show that both STA and ISTA outperform current state-of-the-art methods. The code will be available.
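The allocation mechanism described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the fixed subset size, and the chunk-based ISTA schedule are assumptions made for clarity. STA draws an independent random subset of tasks for each sample; ISTA instead shuffles the task list once and walks through it in consecutive chunks, so every task is allocated exactly once over a short run of iterations.

```python
import random


def sta_allocate(num_samples, tasks, subset_size, seed=0):
    """Stochastic Task Allocation (STA), sketched: each sample is
    independently assigned a random subset of tasks for one iteration."""
    rng = random.Random(seed)
    return [rng.sample(tasks, subset_size) for _ in range(num_samples)]


def ista_schedule(tasks, subset_size, seed=0):
    """Interleaved STA (ISTA), sketched: shuffle the task list once, then
    split it into consecutive chunks so that, across the resulting
    iterations, every task is allocated to the sample exactly once."""
    rng = random.Random(seed)
    order = list(tasks)
    rng.shuffle(order)
    return [order[i:i + subset_size] for i in range(0, len(order), subset_size)]


# Example: three scene-understanding tasks, two allocated per iteration.
tasks = ["segmentation", "depth", "normal"]
print(sta_allocate(4, tasks, subset_size=2))  # one random pair per sample
print(ista_schedule(tasks, subset_size=2))    # chunks jointly cover all tasks
```

Note the contrast: under STA a task may go unseen by a given sample for several iterations, whereas the ISTA schedule guarantees full task coverage within `ceil(len(tasks) / subset_size)` consecutive iterations.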
