Paper Title
Accelerated Learning with Robustness to Adversarial Regressors
Paper Authors
Paper Abstract
High order momentum-based parameter update algorithms have seen widespread applications in training machine learning models. Recently, connections with variational approaches have led to the derivation of new learning algorithms with accelerated learning guarantees. Such methods, however, have only considered the case of static regressors. There is a significant need for parameter update algorithms which can be proven stable in the presence of adversarial time-varying regressors, as is commonplace in control theory. In this paper, we propose a new discrete time algorithm which 1) provides stability and asymptotic convergence guarantees in the presence of adversarial regressors by leveraging insights from adaptive control theory and 2) provides non-asymptotic accelerated learning guarantees by leveraging insights from convex optimization. In particular, our algorithm reaches an $\epsilon$ sub-optimal point in at most $\tilde{\mathcal{O}}(1/\sqrt{\epsilon})$ iterations when regressors are constant, matching Nesterov's lower bound of $\Omega(1/\sqrt{\epsilon})$ up to a $\log(1/\epsilon)$ factor, and provides guaranteed bounds for stability when regressors are time-varying. We provide numerical experiments for a variant of Nesterov's provably hard convex optimization problem with time-varying regressors, as well as the problem of recovering an image with a time-varying blur and noise using streaming data.
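To make the problem setting concrete, the following is a minimal sketch (in Python/NumPy) of a generic Nesterov-style momentum update applied to a streaming least-squares loss with a time-varying regressor. It is only an illustration of the setting described in the abstract, not the algorithm proposed in the paper; the regressor model, step size, and momentum coefficient are assumed for the example.

```python
import numpy as np

# Hypothetical illustration: Nesterov-style momentum on a streaming
# least-squares loss with a time-varying regressor phi_t. This is a
# generic accelerated-gradient sketch, not the paper's algorithm.

rng = np.random.default_rng(0)
d = 5
theta_star = rng.standard_normal(d)   # unknown true parameter
theta = np.zeros(d)                   # current estimate
theta_prev = np.zeros(d)              # previous estimate (for momentum)
lr, beta = 0.05, 0.9                  # step size and momentum (assumed values)

for t in range(2000):
    # Time-varying regressor: random direction with slowly varying scale.
    phi = rng.standard_normal(d) * (1.0 + 0.5 * np.sin(0.01 * t))
    y = phi @ theta_star              # noiseless streaming measurement

    # Nesterov look-ahead point and gradient of the instantaneous loss
    # L_t(theta) = 0.5 * (phi @ theta - y)**2.
    lookahead = theta + beta * (theta - theta_prev)
    grad = (phi @ lookahead - y) * phi

    theta_prev = theta
    theta = lookahead - lr * grad

print("parameter error:", np.linalg.norm(theta - theta_star))
```

As the paper's motivation suggests, plain momentum updates of this kind carry no stability guarantee once the regressor varies adversarially in time; the proposed algorithm is designed to retain acceleration for constant regressors while remaining provably stable in that regime.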