Paper Title
Dynamic Regret of Adaptive Gradient Methods for Strongly Convex Problems
Paper Authors
Paper Abstract
Adaptive gradient algorithms such as ADAGRAD and its variants have gained popularity in the training of deep neural networks. While many works on adaptive methods have focused on the static regret as a performance metric to achieve good regret guarantees, the dynamic regret analysis of these methods remains unclear. As opposed to the static regret, dynamic regret is considered a stronger notion of performance measurement in the sense that it explicitly accounts for the non-stationarity of the environment. In this paper, we study a variant of ADAGRAD (referred to as M-ADAGRAD) in a strongly convex setting via the notion of dynamic regret, which measures the performance of an online learner against a reference (optimal) solution that may change over time. We establish a regret bound in terms of the path-length of the minimizer sequence, which essentially reflects the non-stationarity of the environment. In addition, we improve the dynamic regret bound by exploiting the learner's multiple accesses to the gradient in each round. Empirical results indicate that M-ADAGRAD also works well in practice.
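For readers unfamiliar with the terminology, a minimal sketch of the standard definitions of dynamic regret and path-length used in online convex optimization is given below (the notation is an assumption, not taken from the paper itself: $x_t$ is the learner's decision in round $t$, $f_t$ the loss revealed in round $t$, $\mathcal{X}$ the feasible set, and $x_t^*$ a per-round minimizer):

\[
  \mathrm{D\text{-}Regret}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(x_t^*),
  \qquad x_t^* \in \arg\min_{x \in \mathcal{X}} f_t(x),
\]
\[
  P_T^* \;=\; \sum_{t=2}^{T} \bigl\| x_t^* - x_{t-1}^* \bigr\|.
\]

The path-length $P_T^*$ measures how much the per-round minimizers drift over time, so a bound on $\mathrm{D\text{-}Regret}_T$ expressed in terms of $P_T^*$, as described in the abstract, degrades gracefully with the non-stationarity of the environment and recovers a static-style guarantee when the minimizer does not move.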