Paper Title
Random Weight Factorization Improves the Training of Continuous Neural Representations
Paper Authors
Paper Abstract
Continuous neural representations have recently emerged as a powerful and flexible alternative to classical discretized representations of signals. However, training them to capture fine details in multi-scale signals is difficult and computationally expensive. Here we propose random weight factorization as a simple drop-in replacement for parameterizing and initializing conventional linear layers in coordinate-based multi-layer perceptrons (MLPs) that significantly accelerates and improves their training. We show how this factorization alters the underlying loss landscape and effectively enables each neuron in the network to learn using its own self-adaptive learning rate. This not only helps with mitigating spectral bias, but also allows networks to quickly recover from poor initializations and reach better local minima. We demonstrate how random weight factorization can be leveraged to improve the training of neural representations on a variety of tasks, including image regression, shape representation, computed tomography, inverse rendering, solving partial differential equations, and learning operators between function spaces.
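The abstract describes random weight factorization only at a high level. The sketch below illustrates one plausible form of such a factorization for a single dense layer, where the conventional kernel is re-parameterized as a direction matrix scaled by a randomly initialized per-neuron scale vector. This is a minimal sketch under that assumption: the function names, the exp-of-Gaussian scale initialization, and the mean/stddev values are illustrative, not necessarily the paper's exact parameterization.

```python
import jax
import jax.numpy as jnp

def init_factorized_dense(key, in_dim, out_dim, mean=1.0, stddev=0.1):
    """Initialize a weight-factorized dense layer.

    The conventional kernel W (in_dim x out_dim) is re-parameterized as
    W = V * exp(s), with one scale entry s[j] per output neuron.  The
    exp-of-Gaussian scale initialization and the mean/stddev values are
    illustrative assumptions.
    """
    k_w, k_s = jax.random.split(key)
    # Start from a standard Glorot-style initialization of the full kernel.
    w = jax.random.normal(k_w, (in_dim, out_dim)) * jnp.sqrt(2.0 / (in_dim + out_dim))
    # Random per-neuron log-scales; exp(s) gives strictly positive scale factors.
    s = mean + stddev * jax.random.normal(k_s, (out_dim,))
    # Divide the kernel by its scales so that V * exp(s) reproduces w exactly,
    # i.e. the factorized layer starts from the same function as the plain layer.
    v = w / jnp.exp(s)
    b = jnp.zeros(out_dim)
    return {"s": s, "v": v, "b": b}

def factorized_dense(params, x):
    """Apply the factorized layer: y = x @ (V * exp(s)) + b."""
    kernel = params["v"] * jnp.exp(params["s"])
    return x @ kernel + params["b"]

# Example: a drop-in use on a batch of 2-D input coordinates,
# as one hidden layer of a coordinate-based MLP.
key = jax.random.PRNGKey(0)
params = init_factorized_dense(key, in_dim=2, out_dim=128)
coords = jnp.ones((16, 2))
features = jax.nn.relu(factorized_dense(params, coords))
```

Because the scale vector and the direction matrix receive separate gradients, a gradient step on the factorized parameters rescales each neuron's effective update by its own scale factor, which is consistent with the abstract's description of every neuron learning with its own self-adaptive learning rate.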