Paper Title
Intermediate Entity-based Sparse Interpretable Representation Learning
Paper Authors
Paper Abstract
Interpretable entity representations (IERs) are sparse embeddings that are "human-readable" in that their dimensions correspond to fine-grained entity types and their values are the predicted probabilities that a given entity is of the corresponding type. These methods perform well in zero-shot and low-supervision settings. Compared to standard dense neural embeddings, such interpretable representations may permit analysis and debugging. However, while fine-tuning sparse, interpretable representations improves accuracy on downstream tasks, it destroys the semantics of the dimensions that were enforced in pre-training. Can we maintain the interpretable semantics afforded by IERs while improving predictive performance on downstream tasks? Toward this end, we propose Intermediate enTity-based Sparse Interpretable Representation Learning (ItsIRL). ItsIRL realizes improved performance over prior IERs on biomedical tasks while maintaining "interpretability" generally, and the ability to support model debugging specifically. The latter is enabled in part by the ability to perform "counterfactual" fine-grained entity type manipulation, which we explore in this work. Finally, we propose a method to construct entity-type-based class prototypes that reveal global semantic properties of the classes learned by our model.
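To make the abstract's core constructs concrete, the following is a minimal Python sketch, not the authors' implementation: the type inventory, probabilities, and helper names (to_ier, counterfactual, class_prototype) are illustrative assumptions. It shows an IER as a sparse vector of per-type probabilities, a "counterfactual" edit of one readable dimension, and an entity-type-based class prototype built by averaging IERs.

# Illustrative sketch only; real IER systems score thousands of
# fine-grained types with a trained encoder, not a hand-written dict.
import numpy as np

# Hypothetical fine-grained (biomedical) entity type inventory.
TYPES = ["protein", "enzyme", "disease", "symptom", "drug"]

def to_ier(type_probs):
    """Build an interpretable entity representation: one dimension per
    fine-grained type, each value the predicted probability that the
    entity is of that type. Most entries are ~0, so the vector is sparse."""
    return np.array([type_probs.get(t, 0.0) for t in TYPES])

# e.g., a readable embedding for the entity mention "trypsin"
trypsin = to_ier({"protein": 0.97, "enzyme": 0.91})

def counterfactual(ier, type_name, value):
    """'Counterfactual' type manipulation: overwrite one readable dimension
    and observe how a downstream prediction changes (model debugging)."""
    edited = ier.copy()
    edited[TYPES.index(type_name)] = value
    return edited

def class_prototype(iers):
    """Entity-type-based class prototype: average the IERs of a class's
    examples so the dominant type dimensions summarize the global
    semantics the model has learned for that class."""
    return np.mean(np.stack(iers), axis=0)

Because every dimension is a named type, both the counterfactual edit and the prototype remain directly readable, which is the property the paper argues plain fine-tuning destroys.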