Paper Title

Universal Approximation Property of Neural Ordinary Differential Equations

Paper Authors

Takeshi Teshima, Koichi Tojo, Masahiro Ikeda, Isao Ishikawa, Kenta Oono

Paper Abstract

Neural ordinary differential equations (NODEs) are an invertible neural network architecture, promising for their free-form Jacobian and the availability of a tractable Jacobian determinant estimator. Recently, the representation power of NODEs has been partly uncovered: they form an $L^p$-universal approximator for continuous maps under certain conditions. However, $L^p$-universality may fail to guarantee an approximation over the entire input domain, as it can still hold even if the approximator differs substantially from the target function on a small region of the input space. To further uncover the potential of NODEs, we show a stronger approximation property, namely $\sup$-universality for approximating a large class of diffeomorphisms. This is shown by leveraging a structure theorem of the diffeomorphism group, and the result complements the existing literature by establishing a fairly large set of mappings that NODEs can approximate with a stronger guarantee.
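
For concreteness, here is a sketch of the two guarantees contrasted in the abstract, in standard notation (our symbols, not necessarily the paper's): $\mathcal{M}$ is the model class (here, NODEs), $f$ the target map, and $K$ a compact subset of the input space.

```latex
% L^p-universality: the error is small on average over K, yet the model
% may still differ substantially from f on a region of small measure.
\forall \varepsilon > 0 \;\; \exists g \in \mathcal{M} : \quad
  \Big( \int_K \| f(x) - g(x) \|^p \, dx \Big)^{1/p} < \varepsilon .

% sup-universality: the error is small at every point of K simultaneously,
% the stronger, uniform guarantee established in this paper for a large
% class of diffeomorphisms.
\forall \varepsilon > 0 \;\; \exists g \in \mathcal{M} : \quad
  \sup_{x \in K} \| f(x) - g(x) \| < \varepsilon .
```

Uniform ($\sup$-norm) closeness implies $L^p$-closeness on a compact set, but not conversely, which is why the $\sup$-universality result is strictly stronger.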
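
The two architectural properties named in the first sentence can also be made concrete. Below is a minimal, self-contained PyTorch sketch of a NODE, using our own illustrative hyperparameters and a fixed-step Euler solver rather than the paper's setup: invertibility comes from integrating the learned vector field backward in time, and the instantaneous rate of change of the Jacobian log-determinant is a trace, estimated cheaply with Hutchinson's trick.

```python
import torch
import torch.nn as nn

class VectorField(nn.Module):
    """Learned dynamics f(z, t) defining the flow dz/dt = f(z, t)."""
    def __init__(self, dim=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.Tanh(), nn.Linear(hidden, dim)
        )

    def forward(self, z, t):
        t_col = t.expand(z.shape[0], 1)            # broadcast scalar time
        return self.net(torch.cat([z, t_col], dim=1))

def integrate(f, z, t0=0.0, t1=1.0, steps=200):
    """Fixed-step Euler solver; setting t1 < t0 runs the flow backward,
    which is exactly the inverse map of the NODE."""
    dt = (t1 - t0) / steps
    t = torch.tensor([[t0]])
    for _ in range(steps):
        z = z + dt * f(z, t)
        t = t + dt
    return z

def trace_estimate(f, z, t):
    """Hutchinson estimator of tr(df/dz), the instantaneous rate of
    change of log|det J| along the flow (unbiased, one probe vector)."""
    z = z.detach().requires_grad_(True)
    v = torch.randn_like(z)                        # random probe vector
    fz = f(z, t)
    (vjp,) = torch.autograd.grad(fz, z, v)         # v^T (df/dz)
    return (vjp * v).sum(dim=1)                    # E[v^T J v] = tr(J)

if __name__ == "__main__":
    f = VectorField()
    z0 = torch.randn(5, 2)
    z1 = integrate(f, z0)                          # forward: z(0) -> z(1)
    z0_rec = integrate(f, z1, t0=1.0, t1=0.0)      # backward: inverse map
    print("inversion error:", (z0 - z0_rec).abs().max().item())
    print("tr(df/dz) estimate:", trace_estimate(f, z0, torch.tensor([[0.0]])))
```

The inversion error above is not exactly zero only because of the fixed-step solver's discretization; the continuous-time flow itself is a diffeomorphism, which is what makes the class of diffeomorphisms the natural target for the paper's $\sup$-universality result.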
