Title

A Note on the Regularity of Images Generated by Convolutional Neural Networks

Authors

Andreas Habring, Martin Holler

Abstract

The regularity of images generated by convolutional neural networks, such as the U-net, generative networks, or the deep image prior, is analyzed. In a resolution-independent, infinite dimensional setting, it is shown that such images, represented as functions, are always continuous and, in some circumstances, even continuously differentiable, contradicting the widely accepted modeling of sharp edges in images via jump discontinuities. While such statements require an infinite dimensional setting, the connection to (discretized) neural networks used in practice is made by considering the limit as the resolution approaches infinity. As practical consequence, the results of this paper in particular provide analytical evidence that basic L2 regularization of network weights might lead to over-smoothed outputs.
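The core observation — that convolving a signal with an integrable kernel smooths away jump discontinuities — can be illustrated with a toy 1-D example. This sketch is not from the paper; the step signal and the uniform averaging kernel are invented stand-ins for an image edge and a learned convolution filter with small, spread-out weights (the kind favored by L2 weight regularization):

```python
# A 1-D step signal with a jump discontinuity (an idealized sharp edge).
x = [0.0] * 50 + [1.0] * 50

# A smooth averaging kernel. Hypothetical stand-in for a learned filter
# whose weights are kept small and spread out by L2 regularization.
kernel = [1.0 / 9.0] * 9

def conv_same(signal, k):
    """Discrete convolution with zero padding ('same' output length)."""
    half = len(k) // 2
    out = []
    for i in range(len(signal)):
        s = 0.0
        for j, w in enumerate(k):
            idx = i + j - half
            if 0 <= idx < len(signal):
                s += w * signal[idx]
        out.append(s)
    return out

y = conv_same(x, kernel)

# The largest one-sample increment measures edge sharpness.
max_jump_x = max(abs(b - a) for a, b in zip(x, x[1:]))
max_jump_y = max(abs(b - a) for a, b in zip(y, y[1:]))
print(max_jump_x)  # 1.0: the input jumps from 0 to 1 in one step
print(max_jump_y)  # 1/9: the output edge is spread over the kernel width
```

After convolution, the jump of height 1 becomes a ramp whose per-sample increment equals the kernel weight (1/9 here) — a discrete analogue of the paper's point that CNN outputs, viewed as functions, cannot retain jump discontinuities.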
