Paper Title
Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning
Paper Authors
Paper Abstract
Continual learning studies agents that learn from streams of tasks without forgetting previous ones while adapting to new ones. Two recent continual-learning scenarios have opened new avenues of research. In meta-continual learning, the model is pre-trained to minimize catastrophic forgetting of previous tasks. In continual-meta learning, the aim is to train agents for faster remembering of previous tasks through adaptation. In their original formulations, both methods have limitations. We stand on their shoulders to propose a more general scenario, OSAKA, where an agent must quickly solve new (out-of-distribution) tasks, while also requiring fast remembering. We show that current continual learning, meta-learning, meta-continual learning, and continual-meta learning techniques fail in this new scenario. We propose Continual-MAML, an online extension of the popular MAML algorithm, as a strong baseline for this scenario. We empirically show that Continual-MAML is better suited to the new scenario than the aforementioned methodologies, as well as standard continual learning and meta-learning approaches.
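For context, the sketch below shows the standard MAML inner/outer loop (fast adaptation on a support set, then a meta-update on a query set) that Continual-MAML extends to the online setting. The model architecture, task_stream generator, and hyperparameters are illustrative assumptions, not the paper's implementation; the online machinery Continual-MAML adds on top (e.g., handling a non-stationary task stream without known task boundaries) is not shown.

```python
import torch
import torch.nn as nn

# Toy model and hyperparameters; all sizes and names here are
# illustrative assumptions, not taken from the paper.
model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 0.01

def task_stream(n_tasks=100):
    # Synthetic stand-in for a stream of few-shot tasks: each yields a
    # support set (for adaptation) and a query set (for the meta-update).
    for _ in range(n_tasks):
        x, y = torch.randn(20, 4), torch.randint(0, 2, (20,))
        yield x[:10], y[:10], x[10:], y[10:]

def inner_adapt(params, x, y, steps=1):
    # Inner loop: a few SGD steps on the support set, keeping the graph
    # (create_graph=True) so the outer update can differentiate through
    # the adaptation.
    for _ in range(steps):
        loss = loss_fn(torch.func.functional_call(model, params, (x,)), y)
        grads = torch.autograd.grad(loss, list(params.values()),
                                    create_graph=True)
        params = {n: p - inner_lr * g
                  for (n, p), g in zip(params.items(), grads)}
    return params

for sx, sy, qx, qy in task_stream():
    params = dict(model.named_parameters())
    adapted = inner_adapt(params, sx, sy)   # fast adaptation to the task
    meta_loss = loss_fn(
        torch.func.functional_call(model, adapted, (qx,)), qy)
    meta_opt.zero_grad()
    meta_loss.backward()                    # outer (meta) update
    meta_opt.step()
```

The key design point is that the outer update backpropagates through the inner SGD steps, so the meta-parameters are trained to be an initialization from which a few gradient steps suffice on a new task; first-order variants drop this second-order term for efficiency at some cost in accuracy.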