Paper Title
Knowledge Capture and Replay for Continual Learning
Paper Authors
Paper Abstract
Deep neural networks have shown promise in several domains, and the learned data- (task-)specific information is implicitly stored in the network parameters. Extracting and utilizing these encoded knowledge representations is vital when data is no longer available in the future, especially in a continual learning scenario. In this work, we introduce {\em flashcards}, which are visual representations that {\em capture} the encoded knowledge of a network as a recursive function of predefined random image patterns. In a continual learning scenario, flashcards help to prevent catastrophic forgetting and to consolidate the knowledge of all the previous tasks. Flashcards need to be constructed only before learning the subsequent task, and are hence independent of the number of tasks trained previously. We demonstrate the efficacy of flashcards in capturing learned knowledge representations (as an alternative to the original dataset) and empirically validate them on a variety of continual learning tasks: reconstruction, denoising, task-incremental learning, and new-instance learning classification, using several heterogeneous benchmark datasets. Experimental evidence indicates that: (i) flashcards as a replay strategy are {\em task agnostic}, (ii) perform better than generative replay, and (iii) are on par with episodic replay without additional memory overhead.
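To make the capture step concrete, below is a minimal sketch of how flashcards could be constructed from a trained network, assuming a PyTorch autoencoder. The helper name `construct_flashcards`, the uniform-noise initialization, and the recursion depth are illustrative assumptions; the paper's exact predefined random patterns and number of recursive passes may differ.

```python
import torch
import torch.nn as nn


def construct_flashcards(model: nn.Module,
                         num_flashcards: int = 100,
                         num_passes: int = 10,
                         image_shape: tuple = (1, 32, 32),
                         device: str = "cpu") -> torch.Tensor:
    """Sketch of flashcard construction: random image patterns are
    refined by recursive forward passes through the trained model.

    Assumes `model` is a trained autoencoder whose output has the same
    shape as its input. Initialization and recursion depth are
    illustrative choices, not the paper's exact settings.
    """
    model.eval()
    # Predefined random image patterns (here: uniform noise).
    x = torch.rand(num_flashcards, *image_shape, device=device)
    with torch.no_grad():
        for _ in range(num_passes):
            # Each pass pulls the patterns toward the data manifold the
            # network has learned; the converged images summarize its
            # encoded knowledge.
            x = model(x)
    return x  # flashcards, to be replayed alongside the next task's data
```

In a continual learning loop, such flashcards would be generated once from the current model before training on the next task, then mixed into the next task's training batches as pseudo-rehearsal samples, avoiding the memory cost of storing the original datasets.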