Paper Title
Towards a Universal Continuous Knowledge Base
Paper Authors
Paper Abstract
In artificial intelligence (AI), knowledge is the information required by an intelligent system to accomplish tasks. While traditional knowledge bases use discrete, symbolic representations, detecting knowledge encoded in the continuous representations learned from data has received increasing attention recently. In this work, we propose a method for building a continuous knowledge base (CKB) that can store knowledge imported from multiple, diverse neural networks. The key idea of our approach is to define an interface for each neural network and cast knowledge transfer as a function simulation problem. Experiments on text classification show promising results: the CKB imports knowledge from a single model and then exports that knowledge to a new model, achieving performance comparable to the original model. More interestingly, we import knowledge from multiple models into the knowledge base, from which the fused knowledge is exported back to a single model, achieving higher accuracy than the original models. The CKB also makes it easy to perform knowledge distillation and transfer learning. Our work opens the door to building a universal continuous knowledge base to collect, store, and organize all the continuous knowledge encoded in various neural networks trained for different AI tasks.
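To make the function-simulation idea concrete, the sketch below shows one way the import/export cycle could look in PyTorch. The abstract does not specify an architecture, so everything here is an illustrative assumption rather than the paper's actual design: the names (ContinuousKB, Interface, import_step, export_step), the MLP memory, and the MSE simulation loss are all hypothetical. Importing knowledge trains the shared CKB (plus a per-model interface) to reproduce a trained model's input-output function; exporting reverses the roles.

```python
# Hypothetical sketch of "knowledge transfer as function simulation".
# All names and architectural choices are illustrative assumptions,
# not the paper's actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContinuousKB(nn.Module):
    """Shared continuous memory: a model-agnostic function on a common space."""
    def __init__(self, kb_dim=256, hidden=512):
        super().__init__()
        self.core = nn.Sequential(
            nn.Linear(kb_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, kb_dim),
        )

    def forward(self, z):
        return self.core(z)

class Interface(nn.Module):
    """Per-model adapter: encodes the model's inputs into the CKB space
    and decodes the CKB's output back into the model's output space."""
    def __init__(self, in_dim, out_dim, kb_dim=256):
        super().__init__()
        self.encode = nn.Linear(in_dim, kb_dim)
        self.decode = nn.Linear(kb_dim, out_dim)

    def forward(self, x, kb):
        return self.decode(kb(self.encode(x)))

def import_step(kb, iface, teacher, x, optimizer):
    """Import: train the CKB + interface so that iface(x, kb) simulates
    the trained teacher network's input-output function on inputs x."""
    with torch.no_grad():
        target = teacher(x)               # outputs of the original model
    loss = F.mse_loss(iface(x, kb), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def export_step(kb, iface, student, x, optimizer):
    """Export: freeze the CKB and train a new model to reproduce the
    function read out of the knowledge base on inputs x."""
    with torch.no_grad():
        target = iface(x, kb)             # knowledge read out of the CKB
    loss = F.mse_loss(student(x), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this reading, fusing knowledge from multiple models would amount to running import_step for each model (each with its own Interface) against the same shared ContinuousKB, then exporting once into a single student; knowledge distillation and transfer learning fall out as special cases of an import followed by an export.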