Paper Title

Revisiting lp-constrained Softmax Loss: A Comprehensive Study

Paper Authors

Chintan Trivedi, Konstantinos Makantasis, Antonios Liapis, Georgios N. Yannakakis

Paper Abstract

Normalization is a vital process for any machine learning task as it controls the properties of data and affects model performance at large. The impact of particular forms of normalization, however, has so far been investigated in limited domain-specific classification tasks and not in a general fashion. Motivated by the lack of such a comprehensive study, in this paper we investigate the performance of lp-constrained softmax loss classifiers across different norm orders, magnitudes, and data dimensions in both proof-of-concept classification problems and real-world popular image classification tasks. Experimental results suggest collectively that lp-constrained softmax loss classifiers not only can achieve more accurate classification results but, at the same time, appear to be less prone to overfitting. The core findings hold across the three popular deep learning architectures tested and eight datasets examined, and suggest that lp normalization is a recommended data representation practice for image classification in terms of performance and convergence, and against overfitting.
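
The abstract describes constraining feature representations to a fixed lp-norm magnitude before the softmax classifier, varying the norm order and magnitude as hyperparameters. Below is a minimal sketch of that general idea, assuming PyTorch; the function name `lp_constrain`, the default magnitude, and the toy dimensions are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def lp_constrain(features: torch.Tensor, p: float = 2.0,
                 magnitude: float = 10.0) -> torch.Tensor:
    """Rescale each feature vector to a fixed lp-norm.

    `p` (the norm order) and `magnitude` (the target norm) correspond to
    the hyperparameters varied in the paper's experiments; the defaults
    here are illustrative assumptions.
    """
    # Per-sample lp-norms; clamp avoids division by zero for null vectors.
    norms = features.norm(p=p, dim=1, keepdim=True).clamp_min(1e-12)
    return magnitude * features / norms

# Toy usage: constrain penultimate-layer embeddings, then apply a linear
# softmax classification head and the standard cross-entropy loss.
embeddings = torch.randn(32, 128)        # batch of 128-d features
classifier = torch.nn.Linear(128, 10)    # 10-way classification head
logits = classifier(lp_constrain(embeddings, p=2.0, magnitude=10.0))
loss = F.cross_entropy(logits, torch.randint(0, 10, (32,)))
```

Note that the constraint is applied only to the features, not to the classifier weights; the norm order p and the magnitude act as knobs whose effect on accuracy and overfitting is what the study examines.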
