Paper Title

Quantum continual learning of quantum data realizing knowledge backward transfer

Authors

Haozhen Situ, Tianxiang Lu, Minghua Pan, Lvzhou Li

Abstract

To reach the goal of strong artificial intelligence that can mimic human-level intelligence, AI systems should have the ability to adapt to ever-changing scenarios and learn new knowledge continuously without forgetting previously acquired knowledge. When a machine learning model is trained consecutively on multiple tasks that arrive in sequence, its performance on previously learned tasks may drop dramatically while it learns a new task. To avoid this phenomenon, termed catastrophic forgetting, continual learning, also known as lifelong learning, has been proposed and has become one of the most active research areas in machine learning. As quantum machine learning has blossomed in recent years, it is interesting to develop quantum continual learning. This paper focuses on the case of quantum models for quantum data, where both the computational model and the data to be processed are quantum. The gradient episodic memory method is incorporated to design a quantum continual learning scheme that overcomes catastrophic forgetting and realizes knowledge backward transfer. Specifically, a sequence of quantum state classification tasks is learned continually by a variational quantum classifier whose parameters are optimized by a classical gradient-based optimizer. The gradient of the current task is projected onto the closest gradient that avoids increasing the losses on previous tasks while still allowing them to decrease. Numerical simulation results show that our scheme not only overcomes catastrophic forgetting but also realizes knowledge backward transfer, meaning the classifier's performance on previous tasks is enhanced, rather than compromised, while learning a new task.
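The gradient projection described in the abstract is the gradient episodic memory (GEM) quadratic program: find the gradient g~ closest to the current-task gradient g whose inner product with every stored previous-task gradient is non-negative. Below is a minimal Python sketch of that step, solved in its dual form, assuming the classifier's gradients have been flattened into NumPy vectors; the function name gem_project and the use of SciPy's L-BFGS-B solver are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def gem_project(g, prev_grads):
    """Project gradient g onto the closest gradient g~ satisfying
    <g~, g_k> >= 0 for every stored previous-task gradient g_k,
    via the dual of the GEM quadratic program."""
    G = np.stack(prev_grads)          # rows are previous-task gradients
    if np.all(G @ g >= 0):            # no interference: keep g unchanged
        return g
    GGt = G @ G.T
    Gg = G @ g

    def dual(v):                      # dual objective: 1/2 v^T G G^T v + g^T G^T v
        return 0.5 * v @ GGt @ v + Gg @ v

    res = minimize(dual, np.zeros(len(prev_grads)),
                   jac=lambda v: GGt @ v + Gg,
                   bounds=[(0.0, None)] * len(prev_grads),
                   method="L-BFGS-B")
    return G.T @ res.x + g            # primal solution g~ = G^T v* + g

# Example: g conflicts with the stored gradient [0, 1];
# the projection removes the conflicting component.
g = np.array([1.0, -1.0])
prev = [np.array([0.0, 1.0])]
print(gem_project(g, prev))           # -> approximately [1.0, 0.0]
```

When no previous-task gradient conflicts with g, the projection leaves it untouched, so the scheme only intervenes when a parameter update would otherwise increase the loss on an earlier task.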
