Title
Entity Linking in 100 Languages
Authors
Abstract
We propose a new formulation for multilingual entity linking, where language-specific mentions resolve to a language-agnostic Knowledge Base. We train a dual encoder in this new setting, building on prior work with improved feature representation, negative mining, and an auxiliary entity-pairing task, to obtain a single entity retrieval model that covers 100+ languages and 20 million entities. The model outperforms state-of-the-art results from a far more limited cross-lingual linking task. Rare entities and low-resource languages pose challenges at this large scale, so we advocate for an increased focus on zero- and few-shot evaluation. To this end, we provide Mewsli-9, a large new multilingual dataset (http://goo.gle/mewsli-dataset) matched to our setting, and show how frequency-based analysis provided key insights for our model and training enhancements.
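To make the retrieval setup concrete: a dual encoder maps mentions and entities into a shared embedding space, and linking reduces to nearest-neighbor search over entity embeddings by dot-product similarity. The following is a minimal hypothetical sketch of that retrieval step only, not the paper's implementation; the random unit vectors stand in for trained encoder outputs, and all names (`retrieve`, `entity_embs`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64

# Stand-in for a table of encoded entity representations
# (in the paper's setting this would cover ~20M entities).
entity_embs = rng.normal(size=(1000, dim))
entity_embs /= np.linalg.norm(entity_embs, axis=1, keepdims=True)

def retrieve(mention_emb, entity_embs, k=5):
    """Return indices of the top-k entities by dot-product score."""
    scores = entity_embs @ mention_emb
    return np.argsort(-scores)[:k]

# A mention embedding that (by construction) lies near entity 42.
mention = entity_embs[42] + 0.01 * rng.normal(size=dim)
top = retrieve(mention, entity_embs)
```

In practice the argsort over all entities would be replaced by an approximate nearest-neighbor index; the dot-product scoring itself is what the dual-encoder training optimizes.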