Paper Title
DISCO: Distilling Counterfactuals with Large Language Models
Paper Authors
Paper Abstract
Models trained with counterfactually augmented data learn representations of the causal structure of tasks, enabling robust generalization. However, high-quality counterfactual data is scarce for most tasks and not easily generated at scale. When crowdsourced, such data is typically limited in scale and diversity; when generated using supervised methods, it is computationally expensive to extend to new counterfactual dimensions. In this work, we introduce DISCO (DIStilled COunterfactual Data), a new method for automatically generating high-quality counterfactual data at scale. DISCO engineers prompts to generate phrasal perturbations with a large general language model. Then, a task-specific teacher model filters these generations to distill high-quality counterfactual data. Although the pipeline is task-agnostic, we apply it to natural language inference (NLI) and find that on challenging evaluations such as the NLI stress test, comparatively smaller student models trained with DISCO-generated counterfactuals are more robust (6% absolute) and generalize better across distributions (2%) compared to models trained without data augmentation. Furthermore, DISCO-augmented models are 10% more consistent between counterfactual pairs on three evaluation sets, demonstrating that DISCO augmentation enables models to more reliably learn causal representations. Our repository is available at: https://github.com/eric11eca/disco
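The abstract describes a two-stage pipeline: prompt a large general language model for phrasal perturbations, then keep only the generations that a task-specific teacher model confidently assigns the intended label. The sketch below is a minimal illustration of that filtering step for NLI under assumed names: generate_perturbations is a hypothetical stand-in for the LLM prompting stage, roberta-large-mnli is just an example teacher, and the 0.9 threshold is an arbitrary choice. It is not the authors' implementation; see the linked repository for that.

```python
# Minimal sketch of a DISCO-style generate-then-filter loop (hypothetical
# helper names; the real pipeline lives at https://github.com/eric11eca/disco).
from transformers import pipeline

# Task-specific teacher: an off-the-shelf NLI classifier used to filter
# the LLM's candidate perturbations (example choice of model).
teacher = pipeline("text-classification", model="roberta-large-mnli")


def generate_perturbations(premise: str, hypothesis: str, target_label: str) -> list[str]:
    """Hypothetical stand-in for prompting a large general language model to
    rewrite a span of the premise so the pair flips to target_label."""
    raise NotImplementedError("call your LLM of choice here")


def distill_counterfactuals(premise: str, hypothesis: str, target_label: str,
                            threshold: float = 0.9) -> list[dict]:
    """Keep only generations the teacher confidently labels as target_label."""
    kept = []
    for new_premise in generate_perturbations(premise, hypothesis, target_label):
        # Score the candidate premise/hypothesis pair with the teacher model.
        scores = teacher({"text": new_premise, "text_pair": hypothesis}, top_k=None)
        best = max(scores, key=lambda s: s["score"])
        if best["label"].lower() == target_label.lower() and best["score"] >= threshold:
            kept.append({"premise": new_premise,
                         "hypothesis": hypothesis,
                         "label": target_label})
    return kept
```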