Paper Title
Rethinking Experience Replay: a Bag of Tricks for Continual Learning
Paper Authors
Paper Abstract
In Continual Learning, a Neural Network is trained on a stream of data whose distribution shifts over time. Under these assumptions, it is especially challenging to improve on classes appearing later in the stream while remaining accurate on previous ones. This is due to the infamous problem of catastrophic forgetting, which causes a quick performance degradation when the classifier focuses on learning new categories. Recent literature proposed various approaches to tackle this issue, often resorting to very sophisticated techniques. In this work, we show that naive rehearsal can be patched to achieve similar performance. We point out some shortcomings that restrain Experience Replay (ER) and propose five tricks to mitigate them. Experiments show that ER, thus enhanced, displays an accuracy gain of 51.2 and 26.9 percentage points on the CIFAR-10 and CIFAR-100 datasets respectively (memory buffer size 1000). As a result, it surpasses current state-of-the-art rehearsal-based methods.
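For reference, below is a minimal sketch of the naive Experience Replay (ER) baseline the abstract refers to, before any of the paper's five tricks are applied: a fixed-size memory filled by reservoir sampling over the stream, with each gradient step combining the incoming batch and a batch replayed from memory. This is an illustration under assumed conventions, not the authors' code; the names `ReservoirBuffer` and `er_train_step` are hypothetical, and a PyTorch classifier and optimizer are assumed.

```python
# Hypothetical sketch of naive Experience Replay (ER); not the authors' code.
import random
import torch
import torch.nn.functional as F

class ReservoirBuffer:
    """Fixed-size memory filled via reservoir sampling over the data stream."""
    def __init__(self, capacity):
        self.capacity = capacity   # e.g. 1000, as in the abstract's setting
        self.examples = []         # list of (x, y) tensor pairs
        self.num_seen = 0          # total stream examples observed so far

    def add(self, x, y):
        self.num_seen += 1
        if len(self.examples) < self.capacity:
            self.examples.append((x, y))
        else:
            # Each incoming example is stored with probability capacity / num_seen,
            # so the buffer is a uniform sample of the stream seen so far.
            idx = random.randrange(self.num_seen)
            if idx < self.capacity:
                self.examples[idx] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.examples, min(batch_size, len(self.examples)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

def er_train_step(model, optimizer, buffer, x, y, replay_batch_size=32):
    """One ER step: fit the incoming batch plus a batch replayed from memory."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    if buffer.examples:
        mem_x, mem_y = buffer.sample(replay_batch_size)
        loss = loss + F.cross_entropy(model(mem_x), mem_y)
    loss.backward()
    optimizer.step()
    for xi, yi in zip(x, y):   # insert the current examples into memory
        buffer.add(xi, yi)
    return loss.item()
```

The replayed loss term is what counteracts catastrophic forgetting: without it, the classifier's gradients come only from the newest classes. The paper's contribution is a set of five tricks layered on top of this baseline, which are not reflected in the sketch.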