Paper Title
A Distributional Lens for Multi-Aspect Controllable Text Generation
Paper Authors
Paper Abstract
Multi-aspect controllable text generation is a more challenging and practical task than single-aspect control. Existing methods achieve complex multi-aspect control by fusing multiple controllers, each learned from a single aspect, but suffer from attribute degeneration caused by the mutual interference of these controllers. To address this, we provide observations on attribute fusion from a distributional perspective and propose to directly search for the intersection areas of multiple attribute distributions as their combination for generation. Our method first estimates the attribute space with an autoencoder structure. Afterward, we iteratively approach the intersections by jointly minimizing the distances to points representing different attributes. Finally, we map these intersection points to attribute-relevant sentences with a prefix-tuning-based decoder. Experiments on a three-aspect control task covering sentiment, topic, and detoxification show that our method outperforms several strong baselines on both attribute relevance and text quality, achieving state-of-the-art results. Further analysis also supplies some explanatory support for the effectiveness of our approach.
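The core search step described above, "jointly minimizing distances to points representing different attributes", can be illustrated with a toy sketch. The anchor vectors below are made-up 2-D stand-ins for latent representations (in the paper these would come from the learned autoencoder space), and plain gradient descent on the summed squared distance is an assumption for illustration, not the paper's exact optimizer.

```python
import numpy as np

# Hypothetical anchor points, one per attribute, in a toy 2-D latent space.
# In the actual method these would be encodings of attribute-relevant sentences.
anchors = np.array([
    [1.0, 0.0],   # e.g. a "positive sentiment" sample
    [0.0, 1.0],   # e.g. a "sports topic" sample
    [1.0, 1.0],   # e.g. a "non-toxic" sample
])

# Iteratively approach the intersection region by minimizing the
# summed squared Euclidean distance to all anchors.
z = np.zeros(2)   # starting point in latent space
lr = 0.1
for _ in range(200):
    grad = 2.0 * (z - anchors).sum(axis=0)  # d/dz of sum_i ||z - a_i||^2
    z -= lr * grad

# For squared Euclidean distance the minimizer is the centroid of the
# anchors; with mixed objectives or constraints it lands elsewhere.
print(z)
```

The resulting `z` would then be fed to the prefix-tuning-based decoder to produce a sentence carrying all target attributes simultaneously.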