Paper Title
Using Natural Sentences for Understanding Biases in Language Models
Paper Authors
Paper Abstract
Evaluation of biases in language models is often limited to synthetically generated datasets. This dependence traces back to the need for a prompt-style dataset to trigger specific behaviors of language models. In this paper, we address this gap by creating a prompt dataset with respect to occupations collected from real-world natural sentences present in Wikipedia. We aim to understand the differences between using template-based prompts and natural sentence prompts when studying gender-occupation biases in language models. We find bias evaluations are very sensitive to the design choices of template prompts, and we propose using natural sentence prompts for systematic evaluations to step away from design choices that could introduce bias in the observations.
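To make the contrast between the two prompting styles concrete, below is a minimal sketch (not the authors' released code or dataset) of prompt-based pronoun probing: it feeds GPT-2 a synthetic template prompt and a more natural, Wikipedia-style sentence prompt for the same occupation, then compares the probabilities the model assigns to " he" versus " she" as the next token. The model choice (gpt2), the helper next_token_prob, and the example prompts are illustrative assumptions rather than the paper's actual prompts or metric.

```python
# Minimal sketch of gender-occupation pronoun probing with two prompt styles.
# Assumes the Hugging Face transformers and torch packages are installed.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def next_token_prob(prompt: str, continuation: str) -> float:
    """Probability the model assigns to `continuation` as the single next token after `prompt`."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    cont_id = tokenizer(continuation, add_special_tokens=False).input_ids[0]
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]          # logits for the next-token position
    return torch.softmax(logits, dim=-1)[cont_id].item()

# One synthetic template prompt and one natural-sentence-style prompt for the same occupation.
prompts = {
    "template": "The nurse said that",
    "natural":  "After finishing the night shift, the nurse told reporters that",
}

for kind, prompt in prompts.items():
    p_he = next_token_prob(prompt, " he")
    p_she = next_token_prob(prompt, " she")
    print(f"{kind:8s}  P(he)={p_he:.4f}  P(she)={p_she:.4f}  he/she ratio={p_he / p_she:.2f}")
```

Because measurements like these can shift with small changes to the template wording, the abstract's argument is that sourcing prompts from naturally occurring sentences avoids baking such design choices into the bias estimate.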