Title

Posterior Refinement Improves Sample Efficiency in Bayesian Neural Networks

Authors

Agustinus Kristiadi, Runa Eschenhagen, Philipp Hennig

Abstract

Monte Carlo (MC) integration is the de facto method for approximating the predictive distribution of Bayesian neural networks (BNNs). But even with many MC samples, Gaussian-based BNNs can still yield poor predictive performance due to errors in the posterior approximation. Meanwhile, alternatives to MC integration tend to be more expensive or biased. In this work, we experimentally show that the key to good MC-approximated predictive distributions is the quality of the approximate posterior itself. However, previous methods for obtaining accurate posterior approximations are expensive and non-trivial to implement. We therefore propose to refine Gaussian approximate posteriors with normalizing flows. When applied to last-layer BNNs, this yields a simple post hoc method for improving pre-existing parametric approximations. We show that the resulting posterior approximation is competitive with even the gold-standard full-batch Hamiltonian Monte Carlo.
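To make the recipe in the abstract concrete, below is a minimal, illustrative PyTorch sketch (not the authors' implementation): it starts from a diagonal-Gaussian approximate posterior over last-layer weights, refines it post hoc with a single planar normalizing flow trained by maximizing the ELBO, and then approximates the predictive distribution by MC averaging over refined samples. The unnormalized log-posterior `log_post`, the dimensions, and all variable names are hypothetical stand-ins.

```python
# Minimal sketch of the idea: Gaussian posterior -> flow refinement -> MC prediction.
# All names, dimensions, and the target `log_post` are illustrative assumptions.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

D_FEAT, N_CLASS = 8, 3       # feature dim of the fixed network, number of classes
D_W = D_FEAT * N_CLASS       # dimension of the flattened last-layer weight vector

# 1) Pre-existing Gaussian approximation q0(w) = N(mu, diag(sigma^2)),
#    e.g. from a Laplace or variational fit; random here for illustration.
mu = torch.randn(D_W)
log_sigma = torch.full((D_W,), -1.0)

class PlanarFlow(nn.Module):
    """One planar-flow layer: f(w) = w + u * tanh(w @ v + b)."""
    def __init__(self, dim):
        super().__init__()
        self.u = nn.Parameter(torch.randn(dim) * 0.01)
        self.v = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, w):
        # w: (S, dim) batch of posterior samples
        a = torch.tanh(w @ self.v + self.b)            # (S,)
        w_new = w + a.unsqueeze(-1) * self.u           # (S, dim)
        # log |det Jacobian| of the planar transform
        psi = (1 - a**2).unsqueeze(-1) * self.v        # (S, dim)
        log_det = torch.log(torch.abs(1 + psi @ self.u) + 1e-8)
        return w_new, log_det

flow = PlanarFlow(D_W)

# 2) Refine q0 by maximizing the ELBO against an (assumed) unnormalized
#    log-posterior; a dummy standard-normal stand-in is used here.
def log_post(w):
    return -0.5 * (w**2).sum(-1)

opt = torch.optim.Adam(flow.parameters(), lr=1e-2)
for _ in range(200):
    eps = torch.randn(64, D_W)
    w0 = mu + log_sigma.exp() * eps                    # samples from q0
    log_q0 = (-0.5 * eps**2 - log_sigma
              - 0.5 * math.log(2 * math.pi)).sum(-1)   # log q0(w0)
    w1, log_det = flow(w0)
    # negative ELBO, using log q1(w1) = log q0(w0) - log_det
    loss = (log_q0 - log_det - log_post(w1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# 3) MC-approximate the predictive distribution with refined samples.
with torch.no_grad():
    phi = torch.randn(1, D_FEAT)                       # features of one test point
    w0 = mu + log_sigma.exp() * torch.randn(1000, D_W)
    w1, _ = flow(w0)
    logits = phi @ w1.view(-1, D_FEAT, N_CLASS)        # (1000, 1, N_CLASS)
    pred = torch.softmax(logits, dim=-1).mean(0)       # MC average over samples
print(pred)
```

Because only the flow (and not the underlying network) is trained, the refinement is a cheap post hoc step on top of whatever Gaussian approximation already exists, which is the appeal of the last-layer setting described in the abstract.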
