Paper Title
Online Continual Learning on Sequences
Paper Authors
Paper Abstract
Online continual learning (OCL) refers to the ability of a system to learn over time from a continuous stream of data without having to revisit previously encountered training samples. Learning continually in a single data pass is crucial for agents and robots that operate in changing environments and must acquire, fine-tune, and transfer increasingly complex representations from non-i.i.d. input distributions. Machine learning models that address OCL must alleviate \textit{catastrophic forgetting}, in which hidden representations are disrupted or completely overwritten when learning from streams of novel input. In this chapter, we summarize and discuss recent deep learning models that address OCL on sequential input through the use (and combination) of synaptic regularization, structural plasticity, and experience replay. Different implementations of replay have been proposed that alleviate catastrophic forgetting in connectionist architectures via the re-occurrence of (latent representations of) input sequences, and that functionally resemble mechanisms of hippocampal replay in the mammalian brain. Empirical evidence shows that architectures endowed with experience replay typically outperform those without it in (online) incremental learning tasks.
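To make the replay mechanism above concrete, the following is a minimal sketch, not the chapter's specific method: a fixed-capacity buffer filled by reservoir sampling, and an online update that mixes each incoming stream batch with samples replayed from the buffer. The names `ReservoirBuffer` and `replay_step` are hypothetical, and the sketch assumes a standard PyTorch classifier and optimizer.

import random
import torch
import torch.nn.functional as F


class ReservoirBuffer:
    """Fixed-capacity memory of past (x, y) pairs. Reservoir sampling keeps
    an approximately uniform sample of the stream seen so far, so raw
    training data never needs to be revisited at its source."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []      # stored (x, y) tuples
        self.n_seen = 0     # total stream items observed

    def add(self, x, y):
        self.n_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.n_seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)


def replay_step(model, optimizer, buffer, x_stream, y_stream, replay_size=32):
    """One single-pass update: the incoming stream batch is concatenated
    with samples replayed from the buffer before the gradient step."""
    x, y = x_stream, y_stream
    if buffer.data:
        x_re, y_re = buffer.sample(replay_size)
        x = torch.cat([x, x_re])
        y = torch.cat([y, y_re])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    # store the new stream items only after the update
    for xi, yi in zip(x_stream, y_stream):
        buffer.add(xi, yi)
    return loss.item()

In an OCL loop, `replay_step` would be called once per incoming batch with no second pass over the data; replaying latent representations instead of raw inputs, as some of the surveyed models do, would store hidden activations in the buffer rather than (x, y) pairs.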
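The other main ingredient mentioned above, synaptic regularization, can be sketched just as briefly. The snippet below is an illustrative quadratic penalty in the style of elastic weight consolidation (EWC), not the chapter's own formulation: `importance` and `anchor_params` are assumed to be per-parameter tensors computed after earlier learning (e.g., from a Fisher information estimate), and `lam` is a hypothetical knob trading stability against plasticity.

def consolidation_penalty(model, importance, anchor_params, lam=1.0):
    """EWC-style quadratic penalty: parameters that were important for past
    data are pulled back toward their previously consolidated values."""
    penalty = 0.0
    for name, p in model.named_parameters():
        penalty = penalty + (importance[name] * (p - anchor_params[name]) ** 2).sum()
    return (lam / 2.0) * penalty

# Used by adding the penalty to the task loss before backpropagation, e.g.:
# loss = F.cross_entropy(model(x), y) + consolidation_penalty(model, fisher, old_params)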