Paper Title
Weighted Meta-Learning
Paper Authors
Paper Abstract
Meta-learning leverages related source tasks to learn an initialization that can be quickly fine-tuned to a target task with limited labeled examples. However, many popular meta-learning algorithms, such as model-agnostic meta-learning (MAML), only assume access to the target samples for fine-tuning. In this work, we provide a general framework for meta-learning based on weighting the losses of different source tasks, where the weights are allowed to depend on the target samples. In this general setting, we provide upper bounds on the distance between the weighted empirical risk of the source tasks and the expected target risk in terms of an integral probability metric (IPM) and Rademacher complexity, which apply to a number of meta-learning settings including MAML and a weighted MAML variant. We then develop a learning algorithm based on minimizing the error bound with respect to an empirical IPM, including a weighted MAML algorithm, $\alpha$-MAML. Finally, we demonstrate empirically on several regression problems that our weighted meta-learning algorithm is able to find better initializations than uniformly-weighted meta-learning algorithms, such as MAML.
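To make the weighted-loss idea concrete, the following is a minimal sketch of a weighted, first-order MAML-style meta-update on toy linear-regression tasks. This is an illustrative assumption, not the paper's exact algorithm: it uses a first-order approximation of the meta-gradient, and the task weights `alphas` are fixed by hand here, whereas the paper selects them by minimizing an IPM-based error bound on target samples. All function names are hypothetical.

```python
import numpy as np

def task_grad(theta, X, y):
    # Gradient of the mean-squared-error loss for the linear model X @ theta.
    return 2.0 * X.T @ (X @ theta - y) / len(y)

def weighted_maml_step(theta, tasks, alphas, inner_lr=0.01, outer_lr=0.1):
    """One first-order weighted meta-update: adapt to each source task with a
    single inner gradient step, then move theta along the alpha-weighted sum
    of the post-adaptation gradients (a FOMAML-style approximation)."""
    meta_grad = np.zeros_like(theta)
    for (X, y), a in zip(tasks, alphas):
        adapted = theta - inner_lr * task_grad(theta, X, y)  # inner adaptation
        meta_grad += a * task_grad(adapted, X, y)            # weighted outer gradient
    return theta - outer_lr * meta_grad

# Three source regression tasks with different slopes.
rng = np.random.default_rng(0)
tasks = []
for slope in [1.0, 1.5, 3.0]:
    X = rng.normal(size=(20, 1))
    tasks.append((X, slope * X[:, 0]))

# Hand-picked weights for illustration; the paper chooses them via an empirical IPM.
alphas = [0.45, 0.45, 0.10]
theta = np.zeros(1)
for _ in range(200):
    theta = weighted_maml_step(theta, tasks, alphas)
```

With these weights, the learned initialization `theta` is pulled toward the two heavily-weighted tasks (slopes 1.0 and 1.5) rather than the uniform average over all three, which is the effect the weighting is meant to achieve.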