Title

An Equivalence between Bayesian Priors and Penalties in Variational Inference

Authors

Pierre Wolinski, Guillaume Charpiat, Yann Ollivier

Abstract

In machine learning, it is common to optimize the parameters of a probabilistic model, modulated by an ad hoc regularization term that penalizes some values of the parameters. Regularization terms appear naturally in Variational Inference, a tractable way to approximate Bayesian posteriors: the loss to optimize contains a Kullback--Leibler divergence term between the approximate posterior and a Bayesian prior. We fully characterize the regularizers that can arise according to this procedure, and provide a systematic way to compute the prior corresponding to a given penalty. Such a characterization can be used to discover constraints over the penalty function, so that the overall procedure remains Bayesian.
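As a minimal illustration of the abstract's point (not an example from the paper itself): when the approximate posterior over a single parameter is a Gaussian N(mu, sigma^2) and the Bayesian prior is a standard normal N(0, 1), the KL divergence term has a well-known closed form, and its dependence on mu is exactly an L2 (weight-decay) penalty mu^2 / 2. The function below is a hypothetical sketch of that computation.

```python
import math

def kl_gaussian(mu, sigma):
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ):
    #   0.5 * (sigma^2 + mu^2 - 1) - log(sigma)
    return 0.5 * (sigma**2 + mu**2 - 1.0) - math.log(sigma)

# Posterior equal to the prior incurs no penalty:
print(kl_gaussian(0.0, 1.0))  # 0.0

# With sigma fixed at 1, the KL reduces to mu^2 / 2,
# i.e. a standard L2 regularizer on the posterior mean:
print(kl_gaussian(1.0, 1.0))  # 0.5
```

This is the simplest case of the correspondence the paper studies in full generality: the KL term induced by a given prior acts as a penalty on the variational parameters, and conversely the paper characterizes which penalties can arise this way.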
