Paper title
Spying on your neighbors: Fine-grained probing of contextual embeddings for information about surrounding words
Paper authors
Paper abstract
Although models using contextual word embeddings have achieved state-of-the-art results on a host of NLP tasks, little is known about exactly what information these embeddings encode about the context words that they are understood to reflect. To address this question, we introduce a suite of probing tasks that enable fine-grained testing of contextual embeddings for encoding of information about surrounding words. We apply these tasks to examine the popular BERT, ELMo, and GPT contextual encoders, and find that each of our tested information types is indeed encoded as contextual information across tokens, often with near-perfect recoverability, but the encoders vary in which features they distribute to which tokens, how nuanced their distributions are, and how robust the encoding of each feature is to distance. We discuss implications of these results for how different types of models break down and prioritize word-level context information when constructing token embeddings.
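The abstract does not give implementation details, but the general shape of such a probing task can be sketched: train a lightweight classifier on the contextual embedding of one token to predict a linguistic feature of a neighboring token at some offset. The sketch below is a minimal illustration, assuming a BERT encoder via the Hugging Face transformers library and a toy plurality feature of the left neighbor; the model choice, token positions, and labels are hypothetical and not taken from the paper.

```python
# Minimal sketch of a neighbor-probing setup (illustrative, not the paper's code):
# probe whether the contextual embedding of the token at position i encodes a
# feature of the token at position i - 1 (here, a toy singular/plural label).
import torch
from transformers import BertTokenizer, BertModel
from sklearn.linear_model import LogisticRegression

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

def token_embedding(sentence: str, position: int) -> torch.Tensor:
    """Return the contextual embedding of the token at `position`
    ([CLS] occupies position 0)."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, hidden_dim)
    return hidden[position]

# Toy data: in both sentences, position 3 is "sat"; its left neighbor
# (position 2) is "cat" vs. "cats". The probe asks whether the embedding
# of "sat" recovers the plurality of that neighbor.
sentences = ["the cat sat on the mat", "the cats sat on the mats"]
labels = [0, 1]  # 0 = singular left neighbor, 1 = plural left neighbor

X = torch.stack([token_embedding(s, 3) for s in sentences]).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, labels)
print(probe.score(X, labels))
```

In a real experiment the probe would be trained and evaluated on a large held-out corpus, and the offset between the probed token and the target neighbor would be varied to measure how robust the encoding of each feature is to distance.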