Paper Title
Adversarial Continual Learning
Paper Authors
Paper Abstract
Continual learning aims to learn new tasks without forgetting previously learned ones. We hypothesize that representations learned to solve each task in a sequence have a shared structure while containing some task-specific properties. We show that shared features are significantly less prone to forgetting and propose a novel hybrid continual learning framework that learns a disjoint representation for the task-invariant and task-specific features required to solve a sequence of tasks. Our model combines architecture growth, to prevent forgetting of task-specific skills, with an experience replay approach, to preserve shared skills. We demonstrate that our hybrid approach is effective in avoiding forgetting, and show it is superior to both architecture-based and memory-based approaches on class-incremental learning of a single dataset as well as a sequence of multiple datasets in image classification. Our code is available at \url{https://github.com/facebookresearch/Adversarial-Continual-Learning}.
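The hybrid scheme the abstract describes can be sketched in plain Python. This is a toy illustration under assumed names (not the paper's actual API or training code): one shared module holds task-invariant features, a fresh private module is grown and then frozen per task (architecture growth), and a small replay buffer of past examples is mixed into each update to preserve the shared skills.

```python
import random


class HybridContinualLearner:
    """Toy sketch of a hybrid continual learner (illustrative, not the
    paper's implementation): shared module + per-task private modules +
    an experience replay buffer."""

    def __init__(self, buffer_size=100):
        self.shared = {}           # task-invariant parameters, updated on every task
        self.private = {}          # task_id -> task-specific module (grown, then frozen)
        self.replay_buffer = []    # stored examples from earlier tasks
        self.buffer_size = buffer_size

    def learn_task(self, task_id, examples):
        # Architecture growth: allocate a fresh private module for this task.
        # Earlier private modules are never touched again, so task-specific
        # skills cannot be overwritten by later tasks.
        self.private[task_id] = {"params": f"features specific to task {task_id}"}

        # Experience replay: train the shared module on current data mixed
        # with samples from earlier tasks, keeping shared features stable.
        replayed = random.sample(self.replay_buffer,
                                 min(len(self.replay_buffer), len(examples)))
        batch = list(examples) + replayed
        self.shared["params"] = f"updated on {len(batch)} mixed examples"

        # Retain a few current examples for future replay, up to a fixed cap.
        room = self.buffer_size - len(self.replay_buffer)
        self.replay_buffer.extend(examples[:max(room, 0)])

    def predict(self, task_id, example):
        # Inference combines the shared representation with the private
        # module of the queried task.
        return self.shared, self.private[task_id]
```

The key property this sketch captures is the division of labor: forgetting of task-specific features is prevented structurally (frozen private modules), while forgetting of task-invariant features is mitigated behaviorally (replay of old examples).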