Paper Title

Prompt Conditioned VAE: Enhancing Generative Replay for Lifelong Learning in Task-Oriented Dialogue

Paper Authors

Yingxiu Zhao, Yinhe Zheng, Zhiliang Tian, Chang Gao, Bowen Yu, Haiyang Yu, Yongbin Li, Jian Sun, Nevin L. Zhang

Paper Abstract

Lifelong learning (LL) is vital for advanced task-oriented dialogue (ToD) systems. To address the catastrophic forgetting issue of LL, generative replay methods are widely employed to consolidate past knowledge with generated pseudo samples. However, most existing generative replay methods use only a single task-specific token to control their models. This scheme is usually not strong enough to constrain the generative model due to insufficient information involved. In this paper, we propose a novel method, prompt conditioned VAE for lifelong learning (PCLL), to enhance generative replay by incorporating tasks' statistics. PCLL captures task-specific distributions with a conditional variational autoencoder, conditioned on natural language prompts to guide the pseudo-sample generation. Moreover, it leverages a distillation process to further consolidate past knowledge by alleviating the noise in pseudo samples. Experiments on natural language understanding tasks of ToD systems demonstrate that PCLL significantly outperforms competitive baselines in building LL models.
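
To make the core mechanism concrete, below is a minimal PyTorch sketch of a prompt-conditioned CVAE for generative replay. This is an illustrative assumption, not the authors' released implementation: pooled vector encodings of the prompt and the utterance are assumed to be given, the names (`PromptCVAE`, `utt_enc`, `prompt_enc`) are hypothetical, and a single linear layer stands in for the autoregressive LM decoder used in the paper.

```python
# Minimal sketch (not the authors' code) of a prompt-conditioned CVAE
# for generative replay in lifelong learning.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptCVAE(nn.Module):
    def __init__(self, hidden_dim: int = 256, latent_dim: int = 32):
        super().__init__()
        # Recognition network q(z | x, prompt): sees utterance and prompt.
        self.posterior = nn.Linear(2 * hidden_dim, 2 * latent_dim)
        # Prior network p(z | prompt): sees the task prompt only, so pseudo
        # samples for a past task can be drawn from its prompt alone.
        self.prior = nn.Linear(hidden_dim, 2 * latent_dim)
        # Decoder stub mapping (z, prompt) back to the utterance encoding;
        # in the paper this role is played by an autoregressive LM decoder.
        self.decoder = nn.Linear(latent_dim + hidden_dim, hidden_dim)

    def forward(self, utt_enc, prompt_enc):
        mu_q, logvar_q = self.posterior(
            torch.cat([utt_enc, prompt_enc], dim=-1)).chunk(2, dim=-1)
        mu_p, logvar_p = self.prior(prompt_enc).chunk(2, dim=-1)
        # Reparameterization trick: z ~ q(z | x, prompt).
        z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
        recon = self.decoder(torch.cat([z, prompt_enc], dim=-1))
        # KL(q(z|x,prompt) || p(z|prompt)) between two diagonal Gaussians.
        kl = 0.5 * (logvar_p - logvar_q - 1
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                    ).sum(-1)
        recon_loss = F.mse_loss(recon, utt_enc, reduction="none").sum(-1)
        return (recon_loss + kl).mean()  # negative conditional ELBO

    @torch.no_grad()
    def replay(self, prompt_enc):
        # Generate a pseudo sample for a past task from its prompt alone.
        mu_p, logvar_p = self.prior(prompt_enc).chunk(2, dim=-1)
        z = mu_p + torch.randn_like(mu_p) * (0.5 * logvar_p).exp()
        return self.decoder(torch.cat([z, prompt_enc], dim=-1))
```

The distillation step mentioned in the abstract would sit on top of this sketch: pseudo samples drawn via `replay` from the previous model serve not only as replay data but also as inputs on which the new model is trained to match the old model's outputs, dampening the noise in those samples.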
