Paper Title
Iterative Pre-Conditioning to Expedite the Gradient-Descent Method
Paper Authors
Paper Abstract
This paper considers the problem of multi-agent distributed optimization. In this problem, there are multiple agents in the system, and each agent knows only its local cost function. The objective of the agents is to collectively compute a common minimum of the aggregate of all their local cost functions. In principle, this problem is solvable using a distributed variant of the traditional gradient-descent method, which is an iterative method. However, the speed of convergence of the traditional gradient-descent method is highly influenced by the conditioning of the optimization problem being solved. Specifically, the method requires a large number of iterations to converge to a solution if the optimization problem is ill-conditioned. In this paper, we propose an iterative pre-conditioning approach that can significantly attenuate the influence of the problem's conditioning on the convergence speed of the gradient-descent method. The proposed pre-conditioning approach can be easily implemented in distributed systems and has minimal computation and communication overhead. For now, we consider only a specific distributed optimization problem wherein the individual local cost functions of the agents are quadratic. In addition to the theoretical guarantees, the improved convergence speed of our approach is demonstrated through experiments on a real dataset.
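The abstract does not spell out the algorithm, so the following is a minimal, self-contained Python sketch of the general idea, using a centralized stand-in for the multi-agent setting: a quadratic least-squares cost f(x) = 0.5 * ||A x - b||^2 whose conditioning is deliberately made poor. The iterative pre-conditioner update shown, which drives a matrix K toward the inverse Hessian, is an illustrative choice rather than necessarily the authors' exact update rule, and all names here (A, b, H, K, grad, delta, alpha) are assumptions made for this demo, not the paper's notation.

import numpy as np

# Ill-conditioned quadratic cost f(x) = 0.5 * ||A x - b||^2 (demo setup).
rng = np.random.default_rng(0)
m, n = 100, 10
A = rng.standard_normal((m, n))
A[:, 0] *= 50.0                       # inflate one direction -> poor conditioning
b = rng.standard_normal(m)

H = A.T @ A                           # Hessian of the quadratic cost

def grad(x):
    # Gradient of f at x: A^T (A x - b).
    return H @ x - A.T @ b

delta = 1.0 / np.linalg.norm(H, 2)    # step size for plain gradient descent
alpha = delta                         # step size for the pre-conditioner update

x_gd = np.zeros(n)                    # plain gradient-descent iterate
x_ipg = np.zeros(n)                   # pre-conditioned iterate
K = delta * np.eye(n)                 # pre-conditioner, refined every iteration

for _ in range(500):
    x_gd = x_gd - delta * grad(x_gd)          # plain gradient-descent step
    K = K - alpha * (H @ K - np.eye(n))       # iterative update driving K toward H^{-1}
    x_ipg = x_ipg - K @ grad(x_ipg)           # pre-conditioned gradient step

print("plain GD gradient norm:        ", np.linalg.norm(grad(x_gd)))
print("pre-conditioned gradient norm: ", np.linalg.norm(grad(x_ipg)))

Running this, the plain gradient-descent iterate barely moves along the poorly scaled directions, while the pre-conditioned iterate drives the gradient norm many orders of magnitude lower. The intuition: as K approaches the inverse Hessian, the pre-conditioned step approaches a Newton step, so the effective condition number of K H approaches 1 and the dependence on the original problem's conditioning fades. Crucially, K is refined by a cheap iterative update rather than by inverting H outright, which is the kind of property that keeps the per-iteration computation and communication overhead low in a distributed implementation.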