Paper Title
Sinkhorn Barycenter via Functional Gradient Descent
Paper Authors
Paper Abstract
In this paper, we consider the problem of computing the barycenter of a set of probability distributions under the Sinkhorn divergence. This problem has recently found applications across various domains, including graphics, learning, and vision, as it provides a meaningful mechanism to aggregate knowledge. Unlike previous approaches, which directly operate in the space of probability measures, we recast the Sinkhorn barycenter problem as an instance of unconstrained functional optimization and develop a novel functional gradient descent method named Sinkhorn Descent (SD). We prove that SD converges to a stationary point at a sublinear rate, and under reasonable assumptions, we further show that it asymptotically finds a global minimizer of the Sinkhorn barycenter problem. Moreover, by providing a mean-field analysis, we show that SD preserves the weak convergence of empirical measures. Importantly, the computational complexity of SD scales linearly in the dimension $d$, and we demonstrate its scalability by solving a $100$-dimensional Sinkhorn barycenter problem.
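To make the two ingredients of the abstract concrete, the sketch below shows (i) the Sinkhorn divergence between two empirical measures, computed with the standard Sinkhorn fixed-point iteration and debiased against the self-terms, and (ii) a crude particle-descent loop that moves a point cloud to reduce its entropic transport cost to a target cloud. This is an illustrative simplification, not the paper's Sinkhorn Descent algorithm: all function names, the squared-Euclidean cost, the fixed regularization `eps`, and the plan-fixed (envelope-style) gradient step are assumptions made for the example.

```python
import numpy as np

def sinkhorn_plan(x, y, eps=1.0, n_iter=300):
    """Entropic OT between uniform empirical measures on point clouds x, y.

    Returns the transport plan P and the squared-Euclidean cost matrix C.
    """
    n, m = len(x), len(y)
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    K = np.exp(-C / eps)                      # Gibbs kernel
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    v = np.ones(m)
    for _ in range(n_iter):                   # Sinkhorn fixed-point iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]           # transport plan with marginals a, b
    return P, C

def entropic_cost(x, y, eps=1.0):
    """Transport cost <P, C> under the entropic plan."""
    P, C = sinkhorn_plan(x, y, eps)
    return float(np.sum(P * C))

def sinkhorn_divergence(x, y, eps=1.0):
    """Debiased divergence: S(x, y) = OT(x, y) - (OT(x, x) + OT(y, y)) / 2.

    Debiasing removes the entropic blur, so S(x, x) = 0.
    """
    return (entropic_cost(x, y, eps)
            - 0.5 * entropic_cost(x, x, eps)
            - 0.5 * entropic_cost(y, y, eps))

def particle_descent(x, y, steps=30, lr=0.25, eps=1.0):
    """Toy particle update: descend on OT(x, y) with the plan held fixed.

    Each step moves every particle toward its barycentric projection
    under the current plan; a stand-in for functional gradient descent.
    """
    x = x.copy()
    n = len(x)
    for _ in range(steps):
        P, _ = sinkhorn_plan(x, y, eps)
        target = n * (P @ y)                  # rows of P sum to 1/n
        x += 2 * lr * (target - x)            # step on sum_ij P_ij ||x_i - y_j||^2
    return x
```

For example, starting from a cloud `x` and a shifted target `y`, `sinkhorn_divergence(x, x)` is zero, `sinkhorn_divergence(x, y)` is positive, and running `particle_descent(x, y)` reduces the divergence to the target. The actual method in the paper descends on the barycenter objective over several input measures and comes with the convergence guarantees stated above.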