Title
Tensor networks in machine learning
Authors
Abstract
A tensor network is a type of decomposition used to express and approximate large arrays of data. A given data set, quantum state or higher-dimensional multilinear map is factored and approximated by a composition of smaller multilinear maps. This is reminiscent of how a Boolean function might be decomposed into a gate array: such a gate array represents a special case of tensor decomposition, in which the tensor entries are restricted to 0 and 1 and the factorization is exact. The collection of associated techniques is called tensor network methods: the subject developed independently in several distinct fields of study, which have more recently become interrelated through the language of tensor networks. The central questions in the field concern the expressivity of tensor networks and the reduction of computational overheads. A merger of tensor networks with machine learning is natural. On the one hand, machine learning can aid in determining a factorization of a tensor network approximating a data set. On the other hand, a given tensor network structure can be viewed as a machine learning model, in which the tensor network parameters are adjusted to learn or classify a data set. In this survey we review the basics of tensor networks and explain the ongoing effort to develop the theory of tensor networks in machine learning.
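The idea of factoring a large array into a composition of smaller multilinear maps can be sketched with a toy tensor-train decomposition built from successive truncated SVDs. This is a minimal illustrative example, not code from the survey; the function names and the rank-2 test tensor are assumptions made for the sketch.

```python
import numpy as np

def tt_decompose(T, rank):
    """Approximate a 3-index tensor T[i,j,k] by three smaller cores
    G1[i,r1], G2[r1,j,r2], G3[r2,k] via two truncated SVDs."""
    i, j, k = T.shape
    # First unfolding: group index i against (j, k), truncate to `rank`.
    U, S, Vt = np.linalg.svd(T.reshape(i, j * k), full_matrices=False)
    r1 = min(rank, len(S))
    G1 = U[:, :r1]                                    # shape (i, r1)
    M = (np.diag(S[:r1]) @ Vt[:r1]).reshape(r1 * j, k)
    # Second unfolding: group (r1, j) against k, truncate again.
    U2, S2, Vt2 = np.linalg.svd(M, full_matrices=False)
    r2 = min(rank, len(S2))
    G2 = U2[:, :r2].reshape(r1, j, r2)                # shape (r1, j, r2)
    G3 = np.diag(S2[:r2]) @ Vt2[:r2]                  # shape (r2, k)
    return G1, G2, G3

def tt_contract(G1, G2, G3):
    """Recompose the cores: the full tensor is a composition of the maps."""
    return np.einsum('ia,ajb,bk->ijk', G1, G2, G3)

# A tensor of low rank is recovered exactly once the cores are large enough.
rng = np.random.default_rng(0)
a, b, c = rng.normal(size=(4, 2)), rng.normal(size=(5, 2)), rng.normal(size=(6, 2))
T = np.einsum('ir,jr,kr->ijk', a, b, c)   # CP-rank-2 tensor, shape (4, 5, 6)
G1, G2, G3 = tt_decompose(T, rank=2)
T_hat = tt_contract(G1, G2, G3)
print(np.allclose(T, T_hat))
```

The 4 × 5 × 6 array (120 entries) is stored in cores holding 8 + 20 + 12 = 40 entries; truncating the SVDs below the true rank yields the approximate, lossy regime the abstract alludes to.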