Paper Title
Statistical Guarantees for Approximate Stationary Points of Simple Neural Networks
Paper Authors
Paper Abstract
Since statistical guarantees for neural networks are usually restricted to global optima of intricate objective functions, it is unclear whether these theories really explain the performance of the actual outputs of neural-network pipelines. The goal of this paper is, therefore, to bring statistical theory closer to practice. We develop statistical guarantees for simple neural networks that coincide up to logarithmic factors with those for global optima but apply to stationary points and points nearby. These results support the common notion that, from a mathematical perspective, neural networks do not necessarily need to be optimized globally. More generally, although currently limited to simple neural networks, our theories take a step toward describing the practical properties of neural networks in mathematical terms.