Paper Title
Momentum via Primal Averaging: Theoretical Insights and Learning Rate Schedules for Non-Convex Optimization
Paper Authors
Paper Abstract
Momentum methods are now used pervasively within the machine learning community for training non-convex models such as deep neural networks. Empirically, they outperform traditional stochastic gradient descent (SGD) approaches. In this work, we develop a Lyapunov analysis of SGD with momentum (SGD+M) by utilizing an equivalent rewriting of the method known as the stochastic primal averaging (SPA) form. This analysis is much tighter than previous theory in the non-convex case, and as a result we are able to give precise insights into when SGD+M may outperform SGD, and which hyper-parameter schedules will work and why.
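The SPA rewriting mentioned in the abstract can be illustrated with a small numerical check. The sketch below is a minimal, illustrative comparison assuming the standard constant-hyper-parameter mapping c = 1 − β, η = α/(1 − β) between heavy-ball SGD+M and a primal-averaging update on a toy quadratic; the paper's exact SPA parameterization (in particular with time-varying schedules) may differ.

```python
import numpy as np

# Toy quadratic f(x) = 0.5 * x^T A x with a positive-definite A.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = A @ A.T + np.eye(5)
grad = lambda x: A @ x            # gradient of 0.5 * x^T A x

alpha, beta, steps = 0.01, 0.9, 50
x0 = rng.standard_normal(5)

# --- SGD with momentum (heavy-ball form) ---
x, m = x0.copy(), np.zeros_like(x0)
for _ in range(steps):
    m = beta * m + grad(x)        # momentum buffer
    x = x - alpha * m

# --- Primal-averaging (SPA-style) form ---
# Assumed mapping for constant hyper-parameters: c = 1 - beta, eta = alpha / (1 - beta).
c, eta = 1.0 - beta, alpha / (1.0 - beta)
x_spa, z = x0.copy(), x0.copy()
for _ in range(steps):
    z = z - eta * grad(x_spa)                 # "z" sequence takes plain gradient steps
    x_spa = (1.0 - c) * x_spa + c * z         # iterate is a moving average of z

# With this mapping the two parameterizations trace out the same iterates.
print(np.allclose(x, x_spa))      # True
```

Under this mapping the momentum parameter β controls how heavily the iterate averages over past z values, which is the viewpoint the Lyapunov analysis exploits.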