Paper Title

Reducing catastrophic forgetting with learning on synthetic data

Paper Authors

Wojciech Masarczyk, Ivona Tautkute

Paper Abstract

Catastrophic forgetting is a problem caused by neural networks' inability to learn data in sequence: after learning two tasks in sequence, performance on the first one drops significantly. This is a serious limitation that prevents applying deep learning to many real-life problems where not all object classes are known beforehand, or where changes in the data require adjustments to the model. To mitigate this problem we investigate the use of synthetic data; namely, we answer the question: is it possible to synthetically generate data such that learning it in sequence does not result in catastrophic forgetting? We propose a method that generates such data in a two-step optimisation process via meta-gradients. Our experimental results on the Split-MNIST dataset show that training a model in sequence on this synthetic data does not lead to catastrophic forgetting. We also show that our method of generating data is robust to different learning scenarios.
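The abstract only sketches the two-step optimisation. Below is a minimal, hypothetical PyTorch sketch of how such a meta-gradient data-synthesis loop could look: an inner step trains a fresh model on the learnable synthetic batch with a differentiable SGD update, and an outer step backpropagates the real-data loss through that update into the synthetic images. All names (forward, meta_step, syn_x), the tiny MLP, the dimensions, and the learning rates are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F

# Illustrative sizes only (MNIST-like inputs, 10 classes, 100 synthetic images).
INPUT_DIM, HIDDEN, NUM_CLASSES, N_SYNTHETIC = 784, 128, 10, 100

def init_params():
    # A tiny two-layer MLP, kept functional so the inner update stays differentiable.
    w1 = torch.randn(HIDDEN, INPUT_DIM) * 0.01
    b1 = torch.zeros(HIDDEN)
    w2 = torch.randn(NUM_CLASSES, HIDDEN) * 0.01
    b2 = torch.zeros(NUM_CLASSES)
    return [p.requires_grad_() for p in (w1, b1, w2, b2)]

def forward(params, x):
    w1, b1, w2, b2 = params
    return F.linear(F.relu(F.linear(x, w1, b1)), w2, b2)

# The synthetic images are the learnable quantity; labels are fixed and balanced.
syn_x = torch.randn(N_SYNTHETIC, INPUT_DIM, requires_grad=True)
syn_y = torch.arange(N_SYNTHETIC) % NUM_CLASSES
outer_opt = torch.optim.Adam([syn_x], lr=1e-2)

def meta_step(real_x, real_y, inner_lr=0.1):
    """One two-step optimisation step.

    Inner step: a differentiable SGD update of a fresh model on the
    synthetic batch. Outer step: the real-data loss of the updated model
    is backpropagated *through* that update (the meta-gradient) into syn_x.
    """
    params = init_params()
    inner_loss = F.cross_entropy(forward(params, syn_x), syn_y)
    grads = torch.autograd.grad(inner_loss, params, create_graph=True)
    updated = [p - inner_lr * g for p, g in zip(params, grads)]

    outer_loss = F.cross_entropy(forward(updated, real_x), real_y)
    outer_opt.zero_grad()
    outer_loss.backward()  # meta-gradient flows into syn_x via `updated`
    outer_opt.step()
    return outer_loss.item()

# Stand-in usage with random data for a single task:
real_x = torch.randn(64, INPUT_DIM)
real_y = torch.randint(0, NUM_CLASSES, (64,))
for _ in range(5):
    meta_step(real_x, real_y)
```

Using a freshly initialised model in each meta step is one common design choice in this kind of data distillation; it encourages the synthetic data to be useful regardless of the learner's starting point, which is in the spirit of the robustness claim above.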
