Paper Title
When Does Re-initialization Work?
Paper Authors
Paper Abstract
Re-initializing a neural network during training has been observed to improve generalization in recent works. Yet it is neither widely adopted in deep learning practice nor is it often used in state-of-the-art training protocols. This raises the question of when re-initialization works, and whether it should be used together with regularization techniques such as data augmentation, weight decay and learning rate schedules. In this work, we conduct an extensive empirical comparison of standard training with a selection of re-initialization methods to answer this question, training over 15,000 models on a variety of image classification benchmarks. We first establish that such methods are consistently beneficial for generalization in the absence of any other regularization. However, when deployed alongside other carefully tuned regularization techniques, re-initialization methods offer little to no added benefit for generalization, although optimal generalization performance becomes less sensitive to the choice of learning rate and weight decay hyperparameters. To investigate the impact of re-initialization methods on noisy data, we also consider learning under label noise. Surprisingly, in this case, re-initialization significantly improves upon standard training, even in the presence of other carefully tuned regularization techniques.
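The abstract does not specify which re-initialization methods were compared. As a rough illustration of the general idea, the sketch below periodically re-initializes the last layer of a network between training stages; it is a minimal PyTorch example, not the paper's protocol, and the architecture, schedule, and hyperparameters (`num_stages`, `epochs_per_stage`, `n_layers`) are all hypothetical.

```python
# Minimal sketch: training interleaved with periodic re-initialization
# of the final layer(s). Illustrative only; published re-initialization
# schemes differ in which parameters are reset, how often, and whether
# remaining weights are also modified.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def build_model() -> nn.Sequential:
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(784, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 10),
    )

def reinit_last_layers(model: nn.Sequential, n_layers: int = 1) -> None:
    """Re-draw fresh random weights for the last `n_layers` Linear layers."""
    linear_layers = [m for m in model if isinstance(m, nn.Linear)]
    for layer in linear_layers[-n_layers:]:
        layer.reset_parameters()  # in-place, so the optimizer stays valid

# Dummy data standing in for an image classification benchmark.
train_loader = DataLoader(
    TensorDataset(torch.randn(512, 784), torch.randint(0, 10, (512,))),
    batch_size=64, shuffle=True,
)

model = build_model()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=5e-4)
criterion = nn.CrossEntropyLoss()

num_stages, epochs_per_stage = 4, 10  # hypothetical schedule
for stage in range(num_stages):
    for epoch in range(epochs_per_stage):
        for x, y in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    if stage < num_stages - 1:  # re-initialize between stages, not after the last
        reinit_last_layers(model, n_layers=1)
```

Because `reset_parameters()` modifies the existing parameter tensors in place, the optimizer does not need to be rebuilt after each reset; whether optimizer state (e.g. momentum) should also be cleared is one of the design choices that varies across re-initialization methods.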