Paper Title

Distributed Stochastic Nonconvex Optimization and Learning based on Successive Convex Approximation

Paper Authors

Di Lorenzo, Paolo; Scardapane, Simone

Paper Abstract

We study distributed stochastic nonconvex optimization in multi-agent networks. We introduce a novel algorithmic framework for the distributed minimization of the sum of the expected value of a smooth (possibly nonconvex) function (the agents' sum-utility) plus a convex (possibly nonsmooth) regularizer. The proposed method hinges on successive convex approximation (SCA) techniques, leveraging dynamic consensus as a mechanism to track the average gradient among the agents, and recursive averaging to recover the expected gradient of the sum-utility function. Almost sure convergence to (stationary) solutions of the nonconvex problem is established. Finally, the method is applied to distributed stochastic training of neural networks. Numerical results confirm the theoretical claims, and illustrate the advantages of the proposed method with respect to other methods available in the literature.
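The abstract names three ingredients: a strongly convex (SCA) surrogate solved locally by each agent, dynamic consensus to track the network-average stochastic gradient, and recursive averaging to estimate the expected gradient. The sketch below is a minimal, self-contained illustration of how such pieces typically fit together, not the paper's exact algorithm; the quadratic local losses, the ring mixing matrix W, the step-size schedules alpha and rho, and the l1 regularizer are all illustrative assumptions.

```python
# Hedged sketch of a distributed stochastic SCA-style loop with gradient tracking.
# All problem data and parameters below are assumptions chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 3
A = [rng.standard_normal((dim, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(dim) for _ in range(n_agents)]
lam = 0.01  # weight of the nonsmooth regularizer G(x) = lam * ||x||_1

def stoch_grad(i, x):
    """Noisy gradient of agent i's smooth loss 0.5*||A_i x - b_i||^2 (noise mimics sampling)."""
    return A[i].T @ (A[i] @ x - b[i]) + 0.1 * rng.standard_normal(dim)

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1, applied elementwise."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Doubly stochastic mixing matrix for a ring network (illustrative choice).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = np.zeros((n_agents, dim))                                  # local copies of the decision variable
g = np.array([stoch_grad(i, x[i]) for i in range(n_agents)])   # last local stochastic gradients
y = g.copy()                                                   # dynamic-consensus gradient trackers
d = g.copy()                                                   # recursive averages (expected-gradient estimates)
tau = 1.0                                                      # proximal weight of the convex surrogate

for k in range(300):
    alpha = 2.0 / (k + 2)              # diminishing step size for the SCA smoothing
    rho = 1.0 / (k + 2) ** 0.6         # diminishing weight for the recursive averaging
    # 1) Surrogate solution: proximal (soft-thresholded) step built from the averaged tracker d.
    x_hat = soft_threshold(x - d / tau, lam / tau)
    x_loc = x + alpha * (x_hat - x)    # smoothed local update
    # 2) Consensus step on the local variables.
    x_new = W @ x_loc
    # 3) Fresh stochastic gradients and dynamic-consensus tracking of their network average.
    g_new = np.array([stoch_grad(i, x_new[i]) for i in range(n_agents)])
    y = W @ y + g_new - g
    # 4) Recursive averaging to damp the stochastic-gradient noise.
    d = (1 - rho) * d + rho * y
    x, g = x_new, g_new

print("max disagreement across agents:", np.max(np.abs(x - x.mean(axis=0))))
```

In this toy setup the consensus step drives the local copies together while the tracked, recursively averaged gradient plays the role of the expected gradient of the sum-utility; the diminishing step sizes mirror the conditions under which the abstract's almost-sure convergence claims are usually stated.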
