Paper Title
Learning Multi-Tasks with Inconsistent Labels by using Auxiliary Big Task
Paper Authors
Paper Abstract
Multi-task learning (MTL) aims to improve model performance by transferring and exploiting knowledge shared among tasks. Existing MTL works focus mainly on the scenario where the label sets of the multiple tasks (MTs) are identical, so that the labels can be used directly for learning across tasks. In contrast, few works explore the scenario where each task has only a small number of training samples and the label sets are only partially overlapping, or even disjoint. Learning such MTs is more challenging because less correlation information is available among the tasks. To this end, we propose a framework that learns these tasks by jointly leveraging both the abundant information from a learnt auxiliary big task, whose sufficiently many classes cover those of all the tasks, and the information shared among the partially-overlapped tasks. In our implementation, which uses the same neural network architecture as the learnt auxiliary task to learn the individual tasks, the key idea is to use the available label information to adaptively prune the hidden-layer neurons of the auxiliary network, constructing a corresponding network for each task, accompanied by joint learning across the individual tasks. Our experimental results demonstrate its effectiveness in comparison with state-of-the-art approaches.
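The pruning idea described in the abstract can be sketched roughly as follows. This is a minimal illustration under our own assumptions, not the paper's actual algorithm: the two-layer MLP, the weight-magnitude pruning heuristic, the `keep_ratio` parameter, and the names `MaskedMLP` and `build_task_network` are all hypothetical stand-ins for the label-driven pruning rule the paper proposes.

```python
# Hedged sketch: an auxiliary "big" network is trained on a label set covering
# all tasks; each small task then reuses the same architecture with hidden
# neurons masked (pruned) according to that task's labels. The scoring rule
# below (output-weight magnitude) is an assumption for illustration only.
import torch
import torch.nn as nn

class MaskedMLP(nn.Module):
    """Auxiliary architecture whose hidden neurons can be masked per task."""
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, num_classes)
        # Binary mask over hidden neurons; all neurons active for the big task.
        self.register_buffer("mask", torch.ones(hidden_dim))

    def forward(self, x):
        h = torch.relu(self.fc1(x)) * self.mask  # masked neurons output zero
        return self.fc2(h)

def build_task_network(aux_net, task_label_ids, keep_ratio=0.5):
    """Construct a task-specific network from the learnt auxiliary network by
    keeping the hidden neurons most relevant to the task's labels (scored here
    by output-weight magnitude, a stand-in for the paper's label-driven rule)."""
    hidden_dim = aux_net.fc1.out_features
    task_net = MaskedMLP(aux_net.fc1.in_features, hidden_dim, len(task_label_ids))
    # Warm-start the shared hidden layer from the learnt auxiliary network.
    task_net.fc1.load_state_dict(aux_net.fc1.state_dict())
    with torch.no_grad():
        # Score each hidden neuron by its total weight toward the task's labels.
        w = aux_net.fc2.weight[torch.tensor(task_label_ids)]  # (|labels|, hidden)
        scores = w.abs().sum(dim=0)
        keep = scores.topk(int(keep_ratio * hidden_dim)).indices
        mask = torch.zeros(hidden_dim)
        mask[keep] = 1.0
        task_net.mask.copy_(mask)
    return task_net

# Two small tasks with partially overlapping label sets end up sharing the
# neurons relevant to their common labels, so joint training (e.g., summing
# per-task losses) updates the overlapped structure together.
aux = MaskedMLP(in_dim=64, hidden_dim=128, num_classes=100)    # learnt big task
task_a = build_task_network(aux, task_label_ids=[3, 17, 42])
task_b = build_task_network(aux, task_label_ids=[17, 42, 99])  # overlaps task_a
```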