Paper Title

A Generalized Weighted Optimization Method for Computational Learning and Inversion

Paper Authors

Björn Engquist, Kui Ren, Yunan Yang

Paper Abstract

The generalization capacity of various machine learning models exhibits different phenomena in the under- and over-parameterized regimes. In this paper, we focus on regression models such as feature regression and kernel regression and analyze a generalized weighted least-squares optimization method for computational learning and inversion with noisy data. The highlight of the proposed framework is that we allow weighting in both the parameter space and the data space. The weighting scheme encodes both a priori knowledge on the object to be learned and a strategy to weight the contribution of different data points in the loss function. Here, we characterize the impact of the weighting scheme on the generalization error of the learning method, where we derive explicit generalization errors for the random Fourier feature model in both the under- and over-parameterized regimes. For more general feature maps, error bounds are provided based on the singular values of the feature matrix. We demonstrate that appropriate weighting from prior knowledge can improve the generalization capability of the learned model.
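To make the setup concrete, the following Python sketch fits a random Fourier feature regression with weighting in both the data space and the parameter space, as the abstract describes. The weight matrices `W_data` and `W_param`, the frequency-decay prior, and all problem sizes here are illustrative assumptions, not the specific weighting schemes analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data with noise.
n, p = 50, 200                      # n samples, p features (over-parameterized)
x = rng.uniform(-1.0, 1.0, size=n)
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(n)

# Random Fourier feature map: phi_j(x) = sqrt(2/p) * cos(omega_j * x + b_j).
omega = rng.standard_normal(p) * 5.0
b = rng.uniform(0.0, 2.0 * np.pi, size=p)
Phi = np.sqrt(2.0 / p) * np.cos(np.outer(x, omega) + b)   # (n, p) feature matrix

# Weighting in both spaces (illustrative choices):
# W_data weights the contribution of each data point in the loss;
# W_param encodes a prior on the coefficients (here, damping high frequencies).
W_data = np.eye(n)                              # placeholder: uniform data weights
W_param = np.diag(1.0 / (1.0 + np.abs(omega)))  # assumed decay prior on coefficients

# Weighted least squares: minimize ||W_data (Phi theta - y)||^2 with the
# minimum-norm solution in the reweighted parameterization theta = W_param z,
# solved via the pseudoinverse of the reweighted feature matrix.
A = W_data @ Phi @ W_param
z = np.linalg.pinv(A) @ (W_data @ y)
theta = W_param @ z

print("training residual:", np.linalg.norm(Phi @ theta - y))
```

The minimum-norm step is where the parameter-space weighting acts in the over-parameterized regime: among all interpolating (or loss-minimizing) coefficients, the solver selects the one favored by the prior encoded in `W_param`, which is the mechanism by which appropriate weighting can improve generalization.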
