Paper Title

MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning

Paper Authors

Ehud Karpas, Omri Abend, Yonatan Belinkov, Barak Lenz, Opher Lieber, Nir Ratner, Yoav Shoham, Hofit Bata, Yoav Levine, Kevin Leyton-Brown, Dor Muhlgay, Noam Rozen, Erez Schwartz, Gal Shachaf, Shai Shalev-Shwartz, Amnon Shashua, Moshe Tenenholtz

Paper Abstract

Huge language models (LMs) have ushered in a new era for AI, serving as a gateway to natural-language-based knowledge tasks. Although an essential element of modern AI, LMs are also inherently limited in a number of ways. We discuss these limitations and how they can be avoided by adopting a systems approach. Conceptualizing the challenge as one that involves knowledge and reasoning in addition to linguistic processing, we define a flexible architecture with multiple neural models, complemented by discrete knowledge and reasoning modules. We describe this neuro-symbolic architecture, dubbed the Modular Reasoning, Knowledge and Language (MRKL, pronounced "miracle") system, some of the technical challenges in implementing it, and Jurassic-X, AI21 Labs' MRKL system implementation.
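
The abstract describes MRKL as a set of expert modules (neural LMs, external knowledge sources, and discrete reasoning tools such as a calculator) behind a router that dispatches each query to the appropriate module. The paper itself gives no code, so the following is only a minimal Python sketch of that routing idea; the module names (calculator_expert, knowledge_expert, language_model_expert) and the keyword-based router are illustrative assumptions, whereas in a real MRKL system such as Jurassic-X the routing and argument extraction would themselves involve trained neural models.

```python
import re
from typing import Callable, Dict


def calculator_expert(query: str) -> str:
    """Discrete reasoning module: evaluate a plain arithmetic expression."""
    expression = re.sub(r"[^0-9+\-*/(). ]", "", query)
    # eval() keeps the sketch short; a real system would use a safe expression parser.
    return str(eval(expression))


def knowledge_expert(query: str) -> str:
    """External knowledge module: look up a fact in a toy in-memory knowledge base."""
    knowledge_base = {"capital of france": "Paris"}
    lowered = query.lower()
    for key, value in knowledge_base.items():
        if key in lowered:
            return value
    return "unknown"


def language_model_expert(query: str) -> str:
    """Neural fallback: a real system would call a large LM here."""
    return f"[LM response to: {query}]"


EXPERTS: Dict[str, Callable[[str], str]] = {
    "calculator": calculator_expert,
    "knowledge": knowledge_expert,
    "language_model": language_model_expert,
}


def route(query: str) -> str:
    """Toy router: choose an expert from surface features of the query.
    In an actual MRKL system this decision would be made by a neural model."""
    if re.fullmatch(r"[0-9+\-*/(). ]+", query.strip()):
        return "calculator"
    if query.lower().startswith(("what is the", "who is", "where is")):
        return "knowledge"
    return "language_model"


def mrkl_answer(query: str) -> str:
    """Dispatch the query to the selected module and return its output."""
    return EXPERTS[route(query)](query)


if __name__ == "__main__":
    print(mrkl_answer("12 * (3 + 4)"))                    # calculator module -> 84
    print(mrkl_answer("What is the capital of France?"))  # knowledge module -> Paris
    print(mrkl_answer("Summarize the MRKL paper."))       # falls back to the LM module
```

The point of the sketch is the division of labor the abstract argues for: discrete modules handle arithmetic and up-to-date factual lookups that a standalone LM handles unreliably, while the neural LM covers open-ended language tasks.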
