Paper Title
On Learning Rates and Schrödinger Operators
Paper Authors
Paper Abstract
The learning rate is perhaps the single most important parameter in the training of neural networks and, more broadly, in stochastic (nonconvex) optimization. Accordingly, there are numerous effective, but poorly understood, techniques for tuning the learning rate, including learning rate decay, which starts with a large initial learning rate that is gradually decreased. In this paper, we present a general theoretical analysis of the effect of the learning rate in stochastic gradient descent (SGD). Our analysis is based on the use of a learning-rate-dependent stochastic differential equation (lr-dependent SDE) that serves as a surrogate for SGD. For a broad class of objective functions, we establish a linear rate of convergence for this continuous-time formulation of SGD, highlighting the fundamental importance of the learning rate in SGD, in contrast to gradient descent and stochastic gradient Langevin dynamics. Moreover, we obtain an explicit expression for the optimal linear rate by analyzing the spectrum of the Witten-Laplacian, a special case of the Schrödinger operator associated with the lr-dependent SDE. Strikingly, this expression clearly reveals the dependence of the linear convergence rate on the learning rate: the linear rate decreases rapidly to zero as the learning rate tends to zero for a broad class of nonconvex functions, whereas it stays constant for strongly convex functions. Based on this sharp distinction between nonconvex and convex problems, we provide a mathematical interpretation of the benefits of using learning rate decay for nonconvex optimization.
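For readers who want the objects named in the abstract in symbols, the following is a minimal LaTeX sketch under a standard normalization in which s > 0 denotes the learning rate; the exact constants, function classes, and regularity conditions are those of the paper and are not reproduced here, so treat the formulas below as an assumed illustration rather than the paper's precise statements.

% lr-dependent SDE: a continuous-time surrogate for SGD with learning rate s (assumed normalization),
% whose invariant (Gibbs) density concentrates around minimizers of f as s -> 0.
\[
  dX_t = -\nabla f(X_t)\,dt + \sqrt{s}\,dW_t,
  \qquad \pi_s(x) \propto e^{-2f(x)/s}.
\]
% Conjugating the negative generator -\mathcal{L} = -\tfrac{s}{2}\Delta + \nabla f \cdot \nabla
% by e^{f/s} yields a Schrödinger operator: the semiclassical Witten-Laplacian on functions.
\[
  e^{-f/s}\,(-\mathcal{L})\,e^{f/s}
  = \frac{1}{2s}\Bigl(-s^{2}\Delta + |\nabla f|^{2} - s\,\Delta f\Bigr)
  =: \frac{1}{2s}\,\Delta^{(0)}_{f,s}.
\]
% The optimal linear rate is governed by the smallest nonzero eigenvalue \lambda_s of -\mathcal{L}.
% Heuristically (Eyring--Kramers / semiclassical analysis; see the paper for rigorous statements):
\[
  \lambda_s \asymp e^{-2H_f/s} \quad (\text{nonconvex } f \text{ with barrier height } H_f),
  \qquad
  \lambda_s \ge \mu \quad (\mu\text{-strongly convex } f).
\]

One way to read this pair of estimates is as the quantitative form of the nonconvex/convex distinction invoked in the abstract: for nonconvex objectives the rate is exponentially small in 1/s, so a large initial learning rate is needed for fast progress, while decaying s afterwards sharpens the invariant density around minimizers without destroying the rate in the strongly convex (local) regime.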