Paper Title
Learning from data in the mixed adversarial non-adversarial case: Finding the helpers and ignoring the trolls
Paper Authors
Paper Abstract
The promise of interaction between intelligent conversational agents and humans is that models can learn from such feedback in order to improve. Unfortunately, such exchanges in the wild will not always involve human utterances that are benign or of high quality, and will include a mixture of engaged (helpers) and unengaged or even malicious users (trolls). In this work we study how to perform robust learning in such an environment. We introduce a benchmark evaluation, SafetyMix, which can evaluate methods that learn safe vs. toxic language in a variety of adversarial settings to test their robustness. We propose and analyze several mitigating learning algorithms that identify trolls either at the example or at the user level. Our main finding is that user-based methods, which take into account that troll users will exhibit adversarial behavior across multiple examples, work best in a variety of settings on our benchmark. We then test these methods in a further real-life setting of conversations collected during deployment, with similar results.
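To make the user-level idea concrete, here is a minimal sketch of one way such filtering could work. It assumes each training example carries a noisy per-example troll_score from some upstream safety classifier; the field names, the mean aggregation, and the threshold are illustrative assumptions, not the paper's exact algorithm.

```python
from collections import defaultdict

def filter_by_user(examples, threshold=0.5):
    """Drop all examples from users whose average per-example
    'troll' score exceeds `threshold`.

    `examples` is a list of dicts with hypothetical keys
    'user_id', 'text', and 'troll_score' (a score in [0, 1] from
    some noisy upstream classifier). This is an illustrative
    sketch of user-level filtering, not the paper's method.
    """
    scores = defaultdict(list)
    for ex in examples:
        scores[ex["user_id"]].append(ex["troll_score"])

    # User-level decision: a troll tends to be adversarial across
    # many of their examples, so aggregate over all of a user's data
    # instead of judging each utterance in isolation.
    trolls = {u for u, s in scores.items() if sum(s) / len(s) > threshold}

    return [ex for ex in examples if ex["user_id"] not in trolls]


if __name__ == "__main__":
    data = [
        {"user_id": "helper_1", "text": "Thanks, that was useful!", "troll_score": 0.1},
        {"user_id": "helper_1", "text": "Could you clarify step 2?", "troll_score": 0.2},
        {"user_id": "troll_1", "text": "<toxic utterance>", "troll_score": 0.9},
        {"user_id": "troll_1", "text": "<borderline utterance>", "troll_score": 0.4},
    ]
    kept = filter_by_user(data)
    print([ex["user_id"] for ex in kept])  # only helper_1's examples survive
```

Aggregating over all of a user's examples is what makes the decision robust: a single misscored utterance from a helper is unlikely to push their mean over the threshold, while a persistent troll accumulates evidence across many examples.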