Paper title
All your loss are belong to Bayes
Paper authors
Paper abstract
Loss functions are a cornerstone of machine learning and the starting point of most algorithms. Statistics and Bayesian decision theory have contributed, via properness, to elicit over the past decades a wide set of admissible losses in supervised learning, to which most popular choices belong (logistic, square, Matsushita, etc.). Rather than making a potentially biased ad hoc choice of the loss, there has recently been a boost in efforts to fit the loss to the domain at hand while training the model itself. The key approaches fit a canonical link, a function which monotonically relates the closed unit interval to R and can provide a proper loss via integration. In this paper, we rely on a broader view of proper composite losses and a recent construct from information geometry, source functions, whose fitting alleviates constraints faced by canonical links. We introduce a trick on squared Gaussian Processes to obtain a random process whose paths are compliant source functions with many desirable properties in the context of link estimation. Experimental results demonstrate substantial improvements over the state of the art.
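To make the "proper loss via integration" step concrete, here is a sketch of the standard composite-loss identities (in the style of Reid & Williamson), under which a strictly increasing canonical link ψ on the closed unit interval determines a weight w = ψ′ ≥ 0 and, up to constants, the partial losses of a proper loss. The symbols ℓ₁, ℓ₋₁, ψ, w are our notation for illustration, not the abstract's:

```latex
% Partial losses of a proper loss recovered from a canonical link \psi
% (standard composite-loss identities; constants of integration omitted).
\ell_{1}(u)  = \int_{u}^{1} (1 - c)\,\psi'(c)\,\mathrm{d}c, \qquad
\ell_{-1}(u) = \int_{0}^{u} c\,\psi'(c)\,\mathrm{d}c.

% Sanity check with the logistic link \psi(c) = \log\frac{c}{1-c},
% for which \psi'(c) = \frac{1}{c(1-c)}:
\ell_{1}(u)  = \int_{u}^{1} \frac{\mathrm{d}c}{c}   = -\log u, \qquad
\ell_{-1}(u) = \int_{0}^{u} \frac{\mathrm{d}c}{1-c} = -\log(1 - u),
% i.e. exactly the log-loss, as expected.
```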
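The abstract does not spell out the squared-GP trick itself, so the following is only a minimal numerical sketch of the general idea: squaring a GP path yields an almost-surely non-negative process, whose running integral is non-decreasing (a candidate monotone link) and whose double integral is convex (a candidate source function). The RBF kernel, the uniform grid, and the Riemann-sum integration are all our assumptions for illustration:

```python
import numpy as np

def sample_gp_path(x, length_scale=0.2, seed=0):
    """One sample path of a zero-mean GP with an RBF kernel on grid x.
    The kernel and length scale are assumptions, not the paper's choices."""
    rng = np.random.default_rng(seed)
    d = x[:, None] - x[None, :]
    K = np.exp(-0.5 * (d / length_scale) ** 2) + 1e-8 * np.eye(len(x))
    return rng.multivariate_normal(np.zeros(len(x)), K)

x = np.linspace(0.0, 1.0, 512)   # the closed unit interval from the abstract
dx = x[1] - x[0]

g = sample_gp_path(x)            # raw GP path: sign-indefinite
w = g ** 2                       # the squaring trick: w >= 0 everywhere

# Integrating the non-negative path once gives a non-decreasing function
# (a candidate monotone link); integrating twice gives a convex function
# (a candidate source function). Left Riemann sums keep this numpy-only.
link = np.cumsum(w) * dx
source = np.cumsum(link) * dx

assert np.all(np.diff(link) >= 0)       # monotone by construction
assert np.all(np.diff(source, 2) >= 0)  # discretely convex by construction
```

The point of the construction is that monotonicity and convexity hold path by path, with no constrained optimization: every draw of g gives a valid candidate, which is what makes such a process attractive for link estimation.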