Paper Title
Can neural networks acquire a structural bias from raw linguistic data?
Paper Authors
Paper Abstract
We evaluate whether BERT, a widely used neural network for sentence processing, acquires an inductive bias towards forming structural generalizations through pretraining on raw data. We conduct four experiments testing its preference for structural vs. linear generalizations in different structure-dependent phenomena. We find that BERT makes a structural generalization in 3 out of 4 empirical domains (subject-auxiliary inversion, reflexive binding, and verb tense detection in embedded clauses), but makes a linear generalization when tested on NPI licensing. We argue that these results are the strongest evidence so far from artificial learners supporting the proposition that a structural bias can be acquired from raw data. If this conclusion is correct, it is tentative evidence that some linguistic universals can be acquired by learners without innate biases. However, the precise implications for human language acquisition are unclear, as humans learn language from significantly less data than BERT.
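The abstract does not include the paper's experimental code, but one common way to probe a structural vs. linear preference of this kind is minimal-pair scoring with a masked language model. The sketch below is illustrative only, not the paper's actual protocol: it assumes the HuggingFace transformers library and the bert-base-uncased checkpoint, the helper pseudo_log_likelihood is a hypothetical name introduced here, and the example sentences are made up to contrast the structural rule (front the main-clause auxiliary) with the linear rule (front the first auxiliary in the string) for subject-auxiliary inversion.

```python
# Illustrative sketch: compare BERT's pseudo-log-likelihoods on a
# minimal pair contrasting a structural vs. a linear generalization
# of subject-auxiliary inversion. Not the paper's actual protocol.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum the log-probability of each token when it is masked in turn."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    # Skip the special [CLS] (first) and [SEP] (last) tokens.
    for i in range(1, len(ids) - 1):
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[ids[i]].item()
    return total

# Structural rule: move the auxiliary of the main clause ("has").
structural = "has the dog that is sleeping eaten the bone ?"
# Linear rule: move the first auxiliary in the string ("is").
linear = "is the dog that sleeping has eaten the bone ?"
print("structural:", pseudo_log_likelihood(structural))
print("linear:", pseudo_log_likelihood(linear))
```

A model with a structural bias should assign a higher pseudo-log-likelihood to the first sentence than to the second; the paper itself tests such preferences across four phenomena rather than with this single scoring method.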