Paper Title

Decentralized Optimization On Time-Varying Directed Graphs Under Communication Constraints

Paper Authors

Chen, Yiyue, Hashemi, Abolfazl, Vikalo, Haris

Paper Abstract

We consider the problem of decentralized optimization where a collection of agents, each having access to a local cost function, communicate over a time-varying directed network and aim to minimize the sum of those functions. In practice, the amount of information that can be exchanged between the agents is limited due to communication constraints. We propose a communication-efficient algorithm for decentralized convex optimization that relies on sparsification of the local updates exchanged between neighboring agents in the network. In directed networks, message sparsification alters column-stochasticity -- a property that plays an important role in establishing convergence of decentralized learning tasks. We propose a decentralized optimization scheme that relies on local modification of mixing matrices, and show that it achieves an $\mathcal{O}(\frac{\ln T}{\sqrt{T}})$ convergence rate in the considered setting. Experiments validate the theoretical results and demonstrate the efficacy of the proposed algorithm.
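The sketch below is only meant to illustrate the general idea described in the abstract, not the authors' actual algorithm: agents take local gradient steps, mix over a time-varying directed graph with column-stochastic weights, and transmit top-k sparsified messages while the sender locally retains the un-sent residual so that the effective mixing weights for every coordinate still sum to one per column. The quadratic local costs, the rotating directed ring, the 50/50 weight split, the step-size schedule, and the helper `topk_sparsify` are all assumptions made for this example.

```python
# Illustrative sketch (assumptions only; not the paper's exact method): decentralized
# gradient steps + column-stochastic mixing over a time-varying directed graph, with
# top-k sparsified messages whose dropped coordinates are kept by the sender so each
# column of the effective mixing matrix still sums to one.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, k, T = 5, 20, 5, 2000

# Local quadratic costs f_i(x) = 0.5 * ||A_i x - b_i||^2 (assumed for illustration).
A = [rng.standard_normal((10, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(10) for _ in range(n_agents)]

def grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])

def topk_sparsify(v, k):
    """Keep the k largest-magnitude entries of v; return (kept, residual)."""
    idx = np.argpartition(np.abs(v), -k)[-k:]
    kept = np.zeros_like(v)
    kept[idx] = v[idx]
    return kept, v - kept

x = np.zeros((n_agents, dim))
for t in range(1, T + 1):
    # Time-varying directed ring: at round t, agent i sends only to (i + t) mod n.
    out_neighbor = [(i + t) % n_agents for i in range(n_agents)]
    alpha = 1.0 / np.sqrt(t)  # diminishing step size (assumed)
    x_half = x - alpha * np.array([grad(i, x[i]) for i in range(n_agents)])

    x_new = np.zeros_like(x)
    for i in range(n_agents):
        j = out_neighbor[i]
        # Column-stochastic split: half the mass stays, half is sent to j,
        # but only the top-k coordinates of the message are transmitted.
        msg, residual = topk_sparsify(0.5 * x_half[i], k)
        x_new[j] += msg                          # sparsified message actually sent
        x_new[i] += 0.5 * x_half[i] + residual   # sender keeps its share + un-sent mass
    x = x_new

# Compare the averaged iterate with the minimizer of the sum of local costs.
x_star = np.linalg.lstsq(np.vstack(A), np.concatenate(b), rcond=None)[0]
print("consensus error:", np.max(np.abs(x - x.mean(axis=0))))
print("distance to global minimizer:", np.linalg.norm(x.mean(axis=0) - x_star))
```

Retaining the residual at the sender is one simple way to keep the per-coordinate mixing weights column-stochastic despite sparsification; the paper's scheme modifies the mixing matrices locally, and this sketch should only be read as a stand-in for that idea.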
