Paper Title

Fixed Points of Cone Mapping with the Application to Neural Networks

Paper Authors

Grzegorz Gabor, Krzysztof Rykaczewski

Abstract


We derive conditions for the existence of fixed points of cone mappings without assuming scalability of the functions involved. In the literature on searching for fixed points of interference mappings, monotonicity and scalability are often inseparable. In applications, such mappings are approximated by non-negative neural networks. It turns out, however, that training non-negative networks requires imposing artificial constraints on the model's weights. Moreover, in the case of specific non-negative data, a non-negative mapping need not have only non-negative weights. We therefore consider the problem of the existence of fixed points for general neural networks, assuming tangency conditions with respect to a specific cone. This does not relax the physical assumptions, because even when the input and output are required to be non-negative, the weights may take (small) negative values. Such properties (often observed in papers on the interpretability of neural network weights) lead to a weakening of the monotonicity or scalability assumptions on the mapping associated with the neural network. To the best of our knowledge, this paper is the first to study this phenomenon.
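To make the classical baseline the abstract contrasts against concrete, the following is a minimal sketch of fixed-point iteration for a monotone and scalable (interference-type) mapping, here an affine map f(x) = Ax + b with non-negative entries. The matrix `A` and vector `b` are illustrative values, not from the paper; when the spectral radius of `A` is below one, the iteration converges to the unique non-negative fixed point. The paper's contribution is precisely to relax the scalability side of this picture.

```python
# Sketch: fixed-point iteration for an affine mapping f(x) = A x + b
# with non-negative A and b (monotone and affine, a standard example
# in the interference-mapping literature). A and b are made-up values.

def f(x, A, b):
    # Apply the affine map: (A x)_i + b_i, using plain Python lists.
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) + b_i
            for row, b_i in zip(A, b)]

A = [[0.2, 0.1], [0.05, 0.3]]  # spectral radius < 1 => contraction-like behavior
b = [1.0, 2.0]

x = [0.0, 0.0]                 # non-negative starting point
for _ in range(200):
    x = f(x, A, b)

# x now approximates the unique fixed point x* = (I - A)^{-1} b
print(x)
```

The iteration stays in the non-negative cone because `A` and `b` are non-negative; the paper studies what can still be guaranteed when the weights of the approximating network are allowed to be slightly negative, so that this invariance argument no longer applies directly.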
