Paper Title


Learning to Generate Imaginary Tasks for Improving Generalization in Meta-learning

Authors

Yichen Wu, Long-Kai Huang, Ying Wei

Abstract


The success of meta-learning on existing benchmarks is predicated on the assumption that the distribution of meta-training tasks covers meta-testing tasks. Frequent violation of the assumption in applications with either insufficient tasks or a very narrow meta-training task distribution leads to memorization or learner overfitting. Recent solutions have pursued augmentation of meta-training tasks, while it is still an open question to generate both correct and sufficiently imaginary tasks. In this paper, we seek an approach that up-samples meta-training tasks from the task representation via a task up-sampling network. Besides, the resulting approach named Adversarial Task Up-sampling (ATU) suffices to generate tasks that can maximally contribute to the latest meta-learner by maximizing an adversarial loss. On few-shot sine regression and image classification datasets, we empirically validate the marked improvement of ATU over state-of-the-art task augmentation strategies in the meta-testing performance and also the quality of up-sampled tasks.
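The core adversarial idea can be illustrated with a toy sketch: perturb a real task's parameters by gradient ascent on the current meta-learner's loss, so the up-sampled task is one the learner currently handles poorly. This is only a minimal NumPy illustration under assumed simplifications (sine tasks parameterized by amplitude and phase, a fixed stand-in predictor, finite-difference gradients), not the paper's task up-sampling network; the names `task_loss` and `up_sample_task` are hypothetical.

```python
import numpy as np

def task_loss(amp, phase, predictor):
    """MSE of the current meta-learner's predictor on one sine task."""
    x = np.linspace(-5, 5, 50)
    y = amp * np.sin(x + phase)
    return np.mean((predictor(x) - y) ** 2)

def up_sample_task(amp, phase, predictor, steps=20, lr=0.05, eps=1e-3):
    """Adversarially perturb a task's parameters to *maximize* the
    meta-learner's loss (finite-difference gradient ascent)."""
    for _ in range(steps):
        g_amp = (task_loss(amp + eps, phase, predictor)
                 - task_loss(amp - eps, phase, predictor)) / (2 * eps)
        g_phase = (task_loss(amp, phase + eps, predictor)
                   - task_loss(amp, phase - eps, predictor)) / (2 * eps)
        amp += lr * g_amp        # ascend the loss: a harder task
        phase += lr * g_phase
        amp = np.clip(amp, 0.1, 5.0)  # keep the task in a plausible range
    return amp, phase

# Stand-in meta-learner: it has (roughly) fit the original task already.
predictor = lambda x: 1.0 * np.sin(x)

amp0, phase0 = 1.0, 0.1
amp1, phase1 = up_sample_task(amp0, phase0, predictor)
# The up-sampled task is harder for the current learner than the original.
assert task_loss(amp1, phase1, predictor) > task_loss(amp0, phase0, predictor)
```

In the actual method, a learned up-sampling network produces whole tasks from a task-set representation and the adversarial loss is backpropagated through the meta-learner, rather than perturbing two scalar task parameters as above.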
