Paper Title

Convergence of the Inexact Langevin Algorithm and Score-based Generative Models in KL Divergence

Paper Authors

Kaylee Yingxi Yang, Andre Wibisono

Paper Abstract

We study the Inexact Langevin Dynamics (ILD), Inexact Langevin Algorithm (ILA), and Score-based Generative Modeling (SGM) when utilizing estimated score functions for sampling. Our focus lies in establishing stable biased convergence guarantees in terms of the Kullback-Leibler (KL) divergence. To achieve these guarantees, we impose two key assumptions: 1) the target distribution satisfies the log-Sobolev inequality (LSI), and 2) the score estimator exhibits a bounded Moment Generating Function (MGF) error. Notably, the MGF error assumption we adopt is more lenient than the $L^\infty$ error assumption used in the existing literature. However, it is stronger than the $L^2$ error assumption utilized in recent works, which often leads to unstable bounds. We explore the question of how to obtain a provably accurate score estimator that satisfies the MGF error assumption. Specifically, we demonstrate that a simple estimator based on kernel density estimation fulfills the MGF error assumption for sub-Gaussian target distributions at the population level.
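As a minimal sketch of the objects discussed above (using standard notation that may differ from the paper's), the Inexact Langevin Algorithm runs the discretized Langevin update with an estimated score $s \approx \nabla \log \nu$ in place of the true score of the target distribution $\nu$: with step size $h > 0$,
$$ x_{k+1} = x_k + h\, s(x_k) + \sqrt{2h}\, z_k, \qquad z_k \sim \mathcal{N}(0, I_d). $$
An MGF-type score error assumption of the kind referenced in the abstract bounds
$$ \mathbb{E}_{\nu}\!\left[ \exp\!\big( \lambda\, \| s(x) - \nabla \log \nu(x) \|^2 \big) \right] \le M \quad \text{for some } \lambda > 0,\ M < \infty, $$
which is stronger than an $L^2$ bound on the score error but weaker than an $L^\infty$ bound. A kernel-density-estimation score estimator of the type mentioned takes $\hat{s}(x) = \nabla \log \hat{\nu}_\sigma(x)$, where $\hat{\nu}_\sigma$ is a Gaussian kernel density estimate with bandwidth $\sigma$ built from samples of $\nu$.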
