Paper Title
Learning Languages in the Limit from Positive Information with Finitely Many Memory Changes
Paper Authors
Paper Abstract
We investigate learning collections of languages from texts by an inductive inference machine with access to the current datum and a bounded memory in the form of states. Such a bounded memory states (BMS) learner is considered successful if it eventually settles on a correct hypothesis while using only finitely many different states. We give the complete map of all pairwise relations for an established collection of criteria of successful learning. Most prominently, we show that non-U-shapedness is not restrictive, while conservativeness and (strong) monotonicity are. Some results carry over from iterative learning by a general lemma showing that, for a wealth of restrictions (the semantic restrictions), iterative and bounded memory states learning are equivalent. We also give an example of a non-semantic restriction (strong non-U-shapedness) where the two settings differ.
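To make the BMS model concrete, here is a minimal, hedged sketch (not from the paper): a learner modeled as a map from (previous hypothesis, state, current datum) to (new hypothesis, new state), learning the toy class of finite initial segments { {0,...,n} : n ∈ ℕ }, where the hypothesis n denotes the language {0,...,n}. All names are illustrative assumptions.

```python
# Hypothetical illustration of a bounded memory states (BMS) learner.
# The learner sees only its previous hypothesis, a finite-state memory,
# and the current datum from a text (positive presentation).

def bms_learner(prev_hyp, state, datum):
    # For this toy class a single memory state suffices: the hypothesis
    # itself carries the running maximum, as in iterative learning.
    new_hyp = datum if prev_hyp is None else max(prev_hyp, datum)
    return new_hyp, state

def run_on_text(text):
    # Feed the text datum by datum; record how many distinct states occur.
    hyp, state = None, 0
    states_seen = set()
    for datum in text:
        hyp, state = bms_learner(hyp, state, datum)
        states_seen.add(state)
    return hyp, len(states_seen)

# A text for the language {0, 1, 2, 3, 4}:
hyp, num_states = run_on_text([2, 0, 4, 4, 1, 3, 4])
print(hyp, num_states)  # -> 4 1
```

The learner converges to the correct hypothesis 4 while using a single memory state, illustrating the "finitely many different states" requirement; richer classes may need more states, which is where BMS learning properly extends iterative learning.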