Title
The Connection Between Nesterov's Accelerated Methods and Halpern Fixed-Point Iterations
Authors
Abstract
We derive a direct connection between Nesterov's accelerated first-order algorithm and the Halpern fixed-point iteration scheme for approximating a solution of a co-coercive equation. We show that one method is equivalent to the other via a simple transformation, leading to a simple convergence proof for Nesterov's accelerated scheme. Alternatively, we directly establish convergence rates of Nesterov's accelerated variant, and as a consequence, we obtain a new convergence rate of the Halpern fixed-point iteration. Next, we apply our results to different methods such as proximal-point, forward-backward splitting, and three-operator splitting methods for solving monotone inclusions, where our convergence guarantees are applicable. Since the forward scheme requires the co-coerciveness of the underlying operator, we derive new Nesterov's accelerated variants for both the recent extra-anchored gradient and past-extra anchored gradient methods, which can avoid the co-coerciveness assumption. For the extra-anchored gradient method, we consider two variants studied in [43] for the monotone case and in [21] for the co-monotone case. The latter covers the former as a special case and has a tighter convergence rate. For the past-extra anchored gradient method (an accelerated variant of the optimistic gradient method) [41], we obtain a new Nesterov's accelerated variant with two past-iterate correction terms that may help to develop new Nesterov's accelerated methods for minimax problems and their continuous views. We test our theoretical results on two numerical examples, where the actual convergence rates match the theoretical rates well, up to a constant factor.
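To make the central object of the abstract concrete, the following is a minimal sketch of the classical Halpern fixed-point iteration, x_{k+1} = beta_k * x0 + (1 - beta_k) * T(x_k) with anchor weights beta_k = 1/(k+2), applied to a simple nonexpansive operator. The rotation map used as T here is an illustrative choice (its unique fixed point is the origin), not an example from the paper; the O(1/k) decay of the fixed-point residual matches the known rate for this scheme.

```python
import numpy as np

def T(x):
    """A simple nonexpansive map: rotation by 90 degrees (an isometry).

    Its unique fixed point is the origin. This operator is only an
    illustrative stand-in, not one of the paper's problem instances.
    """
    R = np.array([[0.0, -1.0],
                  [1.0,  0.0]])
    return R @ x

# Halpern iteration: blend the anchor x0 with T(x_k), with weights
# beta_k = 1/(k+2) shrinking toward pure fixed-point iteration.
x0 = np.array([1.0, 0.0])
x = x0.copy()
for k in range(1000):
    beta = 1.0 / (k + 2)
    x = beta * x0 + (1.0 - beta) * T(x)

residual = np.linalg.norm(x - T(x))
print(residual)  # fixed-point residual after 1000 iterations; decays at O(1/k)
```

The anchor term beta_k * x0 is what distinguishes Halpern's scheme from the plain Picard iteration x_{k+1} = T(x_k), and it is this anchoring that the paper relates, via a change of variables, to Nesterov's momentum.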