Paper Title
A Unified Meta-Learning Framework for Dynamic Transfer Learning
Paper Authors
Paper Abstract
Transfer learning refers to the transfer of knowledge or information from a relevant source task to a target task. However, most existing works assume both tasks are sampled from a stationary task distribution, leading to sub-optimal performance on dynamic tasks drawn from a non-stationary task distribution in real-world scenarios. To bridge this gap, in this paper, we study a more realistic and challenging transfer learning setting with dynamic tasks, i.e., source and target tasks that continuously evolve over time. We theoretically show that the expected error on the dynamic target task can be tightly bounded in terms of the source knowledge and the consecutive distribution discrepancy across tasks. This result motivates us to propose a generic meta-learning framework, L2E, for modeling knowledge transferability on dynamic tasks. It is centered around a task-guided meta-learning problem over a group of meta-pairs of tasks, from which we learn a prior model initialization for fast adaptation to the newest target task. L2E enjoys the following properties: (1) effective knowledge transferability across dynamic tasks; (2) fast adaptation to the new target task; (3) mitigation of catastrophic forgetting on historical target tasks; and (4) flexibility in incorporating any existing static transfer learning algorithm. Extensive experiments on various image data sets demonstrate the effectiveness of the proposed L2E framework.
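Since the abstract only describes the framework at a high level, the following is a minimal, first-order MAML-style sketch (not the authors' L2E implementation) of how a prior initialization could be meta-trained over meta-pairs of consecutive source/target tasks; the model, loss_fn, the meta_pairs batch format, and all hyperparameters are hypothetical assumptions introduced for illustration.

```python
# Hypothetical sketch, not the authors' L2E code: a first-order, MAML-style
# outer loop over meta-pairs of consecutive tasks, learning a shared prior
# initialization that adapts quickly to the newest target task.
import copy
import torch


def inner_adapt(model, loss_fn, batch, lr=1e-2, steps=1):
    """Adapt a copy of the shared initialization on a single task with SGD."""
    adapted = copy.deepcopy(model)
    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    x, y = batch
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(adapted(x), y).backward()
        opt.step()
    return adapted


def meta_train(model, loss_fn, meta_pairs, meta_lr=1e-3, epochs=10):
    """meta_pairs: iterable of ((x_src, y_src), (x_tgt, y_tgt)) batches drawn
    from consecutive source/target tasks in the task sequence (assumed format)."""
    meta_opt = torch.optim.Adam(model.parameters(), lr=meta_lr)
    for _ in range(epochs):
        for src_batch, tgt_batch in meta_pairs:
            # Inner loop: adapt the shared initialization on the source side.
            adapted = inner_adapt(model, loss_fn, src_batch)
            # Outer loop: evaluate the adapted copy on the paired target task
            # and push its gradient back onto the shared initialization
            # (first-order approximation of the meta-gradient).
            x_t, y_t = tgt_batch
            meta_loss = loss_fn(adapted(x_t), y_t)
            grads = torch.autograd.grad(meta_loss, tuple(adapted.parameters()))
            meta_opt.zero_grad()
            for p, g in zip(model.parameters(), grads):
                p.grad = g.detach()
            meta_opt.step()
    return model
```

In this sketch, the newest target task would then be handled by one more call to inner_adapt starting from the meta-trained initialization; how L2E actually constructs its task-guided meta-pairs and objective is specified in the paper, not here.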