Paper Title

Cycle-Consistent Adversarial Autoencoders for Unsupervised Text Style Transfer

Paper Authors

Huang, Yufang, Zhu, Wentao, Xiong, Deyi, Zhang, Yiye, Hu, Changjian, Xu, Feiyu

Paper Abstract

Unsupervised text style transfer is full of challenges due to the lack of parallel data and difficulties in content preservation. In this paper, we propose a novel neural approach to unsupervised text style transfer, which we refer to as Cycle-consistent Adversarial autoEncoders (CAE) trained from non-parallel data. CAE consists of three essential components: (1) LSTM autoencoders that encode a text in one style into its latent representation and decode an encoded representation into its original text or a transferred representation into a style-transferred text, (2) adversarial style transfer networks that use an adversarially trained generator to transform a latent representation in one style into a representation in another style, and (3) a cycle-consistent constraint that enhances the capacity of the adversarial style transfer networks in content preservation. The entire CAE with these three components can be trained end-to-end. Extensive experiments and in-depth analyses on two widely-used public datasets consistently validate the effectiveness of proposed CAE in both style transfer and content preservation against several strong baselines in terms of four automatic evaluation metrics and human evaluation.
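
As a rough illustration of how the three components described in the abstract fit together, here is a minimal PyTorch sketch. It is a hypothetical reconstruction from the abstract alone: the module names, layer sizes, and the L1 form of the cycle-consistency loss are assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of the three CAE components from the abstract:
# (1) LSTM autoencoders, (2) adversarial style-transfer generators with
# discriminators over latent representations, (3) a cycle-consistency loss.
# All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Encodes a sentence into a latent vector and decodes a latent back to text."""
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def encode(self, tokens):
        _, (h, _) = self.encoder(self.embed(tokens))
        return h[-1]                      # latent representation z: (batch, hidden)

    def decode(self, z, tokens):
        # Teacher-forced decoding conditioned on the latent vector.
        h0 = z.unsqueeze(0)
        c0 = torch.zeros_like(h0)
        out, _ = self.decoder(self.embed(tokens), (h0, c0))
        return self.out(out)              # logits over the vocabulary

class StyleGenerator(nn.Module):
    """Adversarially trained generator mapping latents of one style to the other."""
    def __init__(self, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim))

    def forward(self, z):
        return self.net(z)

class StyleDiscriminator(nn.Module):
    """Scores whether a latent representation looks like the target style."""
    def __init__(self, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1))

    def forward(self, z):
        return self.net(z)

def cycle_consistency_loss(g_xy, g_yx, z_x, z_y):
    """Round-trip X -> Y -> X (and Y -> X -> Y) should reproduce the original latent."""
    return (torch.mean(torch.abs(g_yx(g_xy(z_x)) - z_x)) +
            torch.mean(torch.abs(g_xy(g_yx(z_y)) - z_y)))
```

In training, the discriminators would push transferred latents toward the target style, while the cycle term keeps the round-trip mapping close to the identity, which is what the abstract credits with improved content preservation.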
