Paper Title
Universality and approximation bounds for echo state networks with random weights
Paper Authors
Paper Abstract
We study the uniform approximation of echo state networks with randomly generated internal weights. These models, in which only the readout weights are optimized during training, have achieved empirical success in learning dynamical systems. Recent results showed that echo state networks with ReLU activation are universal. In this paper, we give an alternative construction and prove that universality holds for general activation functions. Specifically, our main result shows that, under certain conditions on the activation function, there exists a sampling procedure for the internal weights such that the echo state network can approximate any continuous causal time-invariant operator with high probability. In particular, for ReLU activation, we give explicit constructions for these sampling procedures. We also quantify the approximation error of the constructed ReLU echo state networks for sufficiently regular operators.
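To make the setting concrete, the following is a minimal sketch of an echo state network with randomly sampled, fixed internal weights and a trained linear readout. It is illustrative only: the function names, the spectral-radius rescaling, and the least-squares readout fit are assumptions for the sketch, not the sampling procedure constructed in the paper.

```python
# Minimal echo state network sketch (illustrative; not the paper's construction).
import numpy as np

rng = np.random.default_rng(0)

def make_esn(input_dim, state_dim, scale=0.9):
    """Randomly sample internal weights; they stay fixed after sampling."""
    A = rng.normal(size=(state_dim, state_dim))
    A *= scale / np.max(np.abs(np.linalg.eigvals(A)))  # heuristic contractive rescaling (assumption)
    C = rng.normal(size=(state_dim, input_dim))
    b = rng.normal(size=state_dim)
    return A, C, b

def run_states(A, C, b, inputs):
    """Drive the ReLU state recursion x_t = relu(A x_{t-1} + C u_t + b)."""
    x = np.zeros(A.shape[0])
    states = []
    for u in inputs:
        x = np.maximum(A @ x + C @ u + b, 0.0)  # ReLU activation
        states.append(x)
    return np.array(states)

# Only the linear readout W is trained; here it is fitted by least squares
# against a toy causal target sequence.
inputs = rng.normal(size=(200, 1))
targets = 0.1 * np.cumsum(inputs, axis=0)        # toy causal, time-invariant target
A, C, b = make_esn(input_dim=1, state_dim=100)
X = run_states(A, C, b, inputs)
W, *_ = np.linalg.lstsq(X, targets, rcond=None)  # readout weights
preds = X @ W
```

The key point the sketch mirrors is that the state-update weights (A, C, b) are drawn once at random and never updated; only the readout W depends on the data.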