Paper Title

Riemannian Natural Gradient Methods

Authors

Jiang Hu, Ruicheng Ao, Anthony Man-Cho So, Minghan Yang, Zaiwen Wen

Abstract

This paper studies large-scale optimization problems on Riemannian manifolds whose objective function is a finite sum of negative log-probability losses. Such problems arise in various machine learning and signal processing applications. By introducing the notion of Fisher information matrix in the manifold setting, we propose a novel Riemannian natural gradient method, which can be viewed as a natural extension of the natural gradient method from the Euclidean setting to the manifold setting. We establish the almost-sure global convergence of our proposed method under standard assumptions. Moreover, we show that if the loss function satisfies certain convexity and smoothness conditions and the input-output map satisfies a Riemannian Jacobian stability condition, then our proposed method enjoys a local linear -- or, under the Lipschitz continuity of the Riemannian Jacobian of the input-output map, even quadratic -- rate of convergence. We then prove that the Riemannian Jacobian stability condition will be satisfied by a two-layer fully connected neural network with batch normalization with high probability, provided that the width of the network is sufficiently large. This demonstrates the practical relevance of our convergence rate result. Numerical experiments on applications arising from machine learning demonstrate the advantages of the proposed method over state-of-the-art ones.
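For orientation, the following is a minimal sketch, in our own notation rather than the paper's, of the kind of update a Riemannian natural gradient method performs: the Euclidean preconditioning by the Fisher information matrix is replaced by a Fisher-type operator on the tangent space, and the Euclidean step is replaced by a retraction on the manifold. The symbols $F(x_k)$, $\lambda_k$, $t_k$, and $R_{x_k}$ (Fisher operator, damping, step size, retraction) are illustrative assumptions; the paper's exact construction may differ.

```latex
% Sketch of one damped Riemannian natural-gradient step (illustrative notation, not the paper's exact scheme):
%   \operatorname{grad} f(x_k): Riemannian gradient at the current iterate x_k on the manifold M
%   F(x_k):     Fisher-information operator restricted to the tangent space T_{x_k}M
%   \lambda_k:  damping parameter,  t_k: step size,  R_{x_k}: retraction at x_k
\[
  d_k = -\bigl(F(x_k) + \lambda_k\,\mathrm{id}\bigr)^{-1}\operatorname{grad} f(x_k) \in T_{x_k}\mathcal{M},
  \qquad
  x_{k+1} = R_{x_k}\!\bigl(t_k\, d_k\bigr).
\]
```

When $\mathcal{M}$ is a Euclidean space and the retraction is the usual translation $x_k + t_k d_k$, this reduces to the classical damped natural gradient update.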
