Paper Title
MMD-Regularized Unbalanced Optimal Transport
Paper Authors
Paper Abstract
We study the unbalanced optimal transport (UOT) problem, where the marginal constraints are enforced using Maximum Mean Discrepancy (MMD) regularization. Our work is motivated by the observation that the literature on UOT is focused on regularization based on $ϕ$-divergence (e.g., KL divergence). Despite the popularity of MMD, its role as a regularizer in the context of UOT seems less understood. We begin by deriving a specific dual of MMD-regularized UOT (MMD-UOT), which helps us prove several useful properties. One interesting outcome of this duality result is that MMD-UOT induces novel metrics, which not only lift the ground metric like the Wasserstein but are also sample-wise efficient to estimate like the MMD. Further, for real-world applications involving non-discrete measures, we present an estimator for the transport plan that is supported only on the given ($m$) samples. Under certain conditions, we prove that the estimation error with this finitely-supported transport plan is also $\mathcal{O}(1/\sqrt{m})$. As far as we know, such error bounds that are free from the curse of dimensionality are not known for $ϕ$-divergence regularized UOT. Finally, we discuss how the proposed estimator can be computed efficiently using accelerated gradient descent. Our experiments show that MMD-UOT consistently outperforms popular baselines, including KL-regularized UOT and MMD, in diverse machine learning applications.
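The abstract describes a transport plan supported only on the given samples, with marginal mismatch penalized by squared MMD and the objective minimized by gradient methods. The sketch below is an illustrative toy version of that idea, not the paper's algorithm: it uses a Gaussian kernel, squared-Euclidean cost, and plain projected gradient descent in place of the accelerated method; all function names, step sizes, and regularization weights are our own assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd_uot_objective(pi, C, Kx, Ky, a, b, lam):
    # <C, pi> + lam * (MMD^2 of row marginal vs a + MMD^2 of column marginal vs b),
    # where MMD^2 between weighted measures on shared support is (u-v)^T K (u-v).
    ra, cb = pi.sum(1) - a, pi.sum(0) - b
    return (C * pi).sum() + lam * (ra @ Kx @ ra + cb @ Ky @ cb)

def solve_mmd_uot(X, Y, a, b, lam=1.0, sigma=1.0, lr=1e-3, iters=1000):
    # Projected gradient descent on a non-negative m x n plan supported on the
    # given samples (plain GD stands in for the paper's accelerated variant).
    C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)  # squared-Euclidean cost
    Kx, Ky = rbf_kernel(X, X, sigma), rbf_kernel(Y, Y, sigma)
    pi = np.outer(a, b)  # start from the independent coupling
    for _ in range(iters):
        grad = C + 2 * lam * (Kx @ (pi.sum(1) - a))[:, None] \
                 + 2 * lam * (Ky @ (pi.sum(0) - b))[None, :]
        pi = np.maximum(pi - lr * grad, 0.0)  # project back to non-negative plans
    return pi
```

Because the marginal constraints enter only through the MMD penalty, the resulting plan's marginals need not match `a` and `b` exactly, which is precisely the "unbalanced" relaxation the abstract refers to.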