Paper Title
Special Properties of Gradient Descent with Large Learning Rates
Paper Authors
Paper Abstract
When training neural networks, it has been widely observed that a large step size is essential in stochastic gradient descent (SGD) for obtaining superior models. However, the effect of large step sizes on the success of SGD is not well understood theoretically. Several previous works have attributed this success to the stochastic noise present in SGD. However, we show through a novel set of experiments that stochastic noise is not sufficient to explain good non-convex training, and that instead the effect of a large learning rate itself is essential for obtaining the best performance. We demonstrate the same effects also in the noiseless case, i.e., for full-batch GD. We formally prove that GD with a large step size, on certain non-convex function classes, follows a different trajectory than GD with a small step size, which can lead to convergence to a global minimum instead of a local one. Our settings provide a framework for future analysis that allows comparing algorithms based on behaviors that cannot be observed in traditional settings.
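The abstract's central claim, that full-batch GD with a large step size follows a qualitatively different trajectory and can escape a sharp local minimum that small-step GD settles into, can be illustrated on a toy one-dimensional objective. The Python sketch below is an illustration only: the objective f, the starting point, and the two learning rates are hypothetical choices made for the demonstration and are not taken from the paper.

import numpy as np

# Toy 1-D objective (illustrative only, not the function class analyzed in
# the paper): a flat global minimum near x = 0 and a sharp local minimum
# near x = 1.95 carved out by a narrow Gaussian well.
def f(x):
    return 0.5 * x**2 - np.exp(-20.0 * (x - 2.0) ** 2)

def grad_f(x):
    return x + 40.0 * (x - 2.0) * np.exp(-20.0 * (x - 2.0) ** 2)

def gd(x0, lr, steps=500):
    """Plain full-batch gradient descent, with no stochastic noise."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad_f(x)
    return x

x0 = 3.0
x_small = gd(x0, lr=0.01)  # small step: settles in the sharp local minimum near x = 1.95
x_large = gd(x0, lr=0.2)   # large step: bounces out of the sharp well, reaches the flat global minimum near x = 0

print(f"small lr: x = {x_small:.3f}, f(x) = {f(x_small):.3f}")
print(f"large lr: x = {x_large:.3f}, f(x) = {f(x_large):.3f}")

With the large step size, the iterates cannot remain in the narrow well because the learning rate exceeds 2 divided by the local curvature there, so they bounce out and settle in the flat basin around x = 0; the small step size is stable in both basins and stops at whichever minimum it reaches first.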