Paper Title


The Return of Lexical Dependencies: Neural Lexicalized PCFGs

Authors

Hao Zhu, Yonatan Bisk, Graham Neubig

Abstract


In this paper we demonstrate that context-free grammar (CFG) based methods for grammar induction benefit from modeling lexical dependencies. This contrasts with the most popular current methods for grammar induction, which focus on discovering either constituents or dependencies. Previous approaches to marrying these two disparate syntactic formalisms (e.g. lexicalized PCFGs) have been plagued by sparsity, making them unsuitable for unsupervised grammar induction. However, in this work, we present novel neural models of lexicalized PCFGs which allow us to overcome sparsity problems and effectively induce both constituents and dependencies within a single model. Experiments demonstrate that this unified framework produces stronger results on both representations than modeling either formalism alone. Code is available at https://github.com/neulab/neural-lpcfg.
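To make the abstract's central idea concrete: in a lexicalized PCFG, each nonterminal carries a head word, so a single binary rule A[h] -> B[h] C[h'] encodes both a constituent split and a dependency arc from h to h'. The toy sketch below (not the paper's neural model; rule names and probabilities are invented for illustration) scores a tiny lexicalized parse by multiplying rule probabilities; a neural L-PCFG would instead compute these scores from embeddings of the nonterminals and head words.

```python
# Toy lexicalized PCFG: nonterminals are annotated with head words,
# so each binary rule also expresses a head-dependent relation.
from functools import reduce

# Hypothetical hand-set rule probabilities (a neural L-PCFG would
# parameterize these with neural networks over symbol/word embeddings).
RULES = {
    ("S[saw]", "NP[she]", "VP[saw]"): 0.9,   # dependency: saw -> she
    ("VP[saw]", "V[saw]", "NP[dog]"): 0.8,   # dependency: saw -> dog
    ("NP[dog]", "D[a]", "N[dog]"): 0.7,      # dependency: dog -> a
}
LEX = {("NP[she]", "she"): 1.0, ("V[saw]", "saw"): 1.0,
       ("D[a]", "a"): 1.0, ("N[dog]", "dog"): 1.0}

def tree_prob(tree):
    """Probability of a lexicalized tree = product of its rule probs."""
    label, children = tree[0], tree[1:]
    if len(children) == 1 and isinstance(children[0], str):
        return LEX[(label, children[0])]          # preterminal -> word
    p = RULES[(label,) + tuple(c[0] for c in children)]
    return p * reduce(lambda acc, t: acc * tree_prob(t), children, 1.0)

parse = ("S[saw]",
         ("NP[she]", "she"),
         ("VP[saw]", ("V[saw]", "saw"),
          ("NP[dog]", ("D[a]", "a"), ("N[dog]", "dog"))))
print(tree_prob(parse))  # 0.9 * 0.8 * 0.7 = 0.504
```

The sparsity problem the abstract mentions is visible even here: with word-annotated symbols, the rule table grows with the vocabulary, which is why the paper replaces count-based rule tables with neural scoring functions.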
