Paper Title
Generalization and Memorization: The Bias Potential Model
Paper Authors

Paper Abstract
Models for learning probability distributions, such as generative models and density estimators, behave quite differently from models for learning functions. One example is the memorization phenomenon, namely the eventual convergence to the empirical distribution, that occurs in generative adversarial networks (GANs). For this reason, the issue of generalization is more subtle than it is for supervised learning. For the bias potential model, we show that dimension-independent generalization accuracy is achievable if early stopping is adopted, even though, in the long term, the model either memorizes the samples or diverges.
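To make the setting concrete, below is a minimal one-dimensional sketch of a bias potential model: the learned density is represented as rho(x) proportional to exp(-V(x)), with the potential V parameterized by random features, and V is trained by gradient descent on the cross-entropy with the empirical samples. The target mixture, the ReLU random features, and all constants are illustrative assumptions, not details taken from the paper; the printed KL(true || model) is the quantity one would monitor for early stopping, while the training cross-entropy keeps decreasing as the model fits the n samples ever more closely.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target density: a two-component Gaussian mixture on a bounded grid,
# so normalizing constants can be computed by simple quadrature (1-D only).
grid = np.linspace(-6.0, 6.0, 1201)
dx = grid[1] - grid[0]
true_pdf = 0.5 * np.exp(-0.5 * (grid - 2.0) ** 2) \
         + 0.5 * np.exp(-0.5 * (grid + 2.0) ** 2)
true_pdf /= true_pdf.sum() * dx

# n i.i.d. training samples from the target.
n = 100
means = np.where(rng.integers(0, 2, n) == 0, -2.0, 2.0)
samples = rng.normal(means, 1.0)

# Random-feature potential V(x) = (1/m) * sum_j a_j * relu(w_j x + b_j);
# only the coefficients `a` are trained (an illustrative choice).
m = 500
w = rng.normal(size=m)
b = rng.uniform(-6.0, 6.0, size=m)
relu = lambda z: np.maximum(z, 0.0)
feat_grid = relu(np.outer(grid, w) + b)     # (len(grid), m)
feat_samp = relu(np.outer(samples, w) + b)  # (n, m)
a = np.zeros(m)

def density_and_logZ(a):
    """Normalized model density on the grid, and log Z, for rho ~ exp(-V)."""
    V = feat_grid @ a / m
    c = V.min()                              # stabilize the exponentials
    unnorm = np.exp(-(V - c))
    Z_shifted = unnorm.sum() * dx            # equals Z * e^{c}
    return unnorm / Z_shifted, np.log(Z_shifted) - c

lr = 2.0
for t in range(20001):
    p, logZ = density_and_logZ(a)
    if t % 2000 == 0:
        # Training loss: empirical cross-entropy  E_samples[V] + log Z.
        train_ce = (feat_samp @ a / m).mean() + logZ
        kl = np.sum(true_pdf * dx * np.log((true_pdf + 1e-12) / (p + 1e-12)))
        print(f"step {t:6d}  train CE {train_ce:8.4f}  KL(true||model) {kl:.4f}")
    # Gradient of the cross-entropy in `a`: the gap between empirical and
    # model expectations of the features.
    grad = (feat_samp.mean(axis=0) - (p * dx) @ feat_grid) / m
    a -= lr * grad
```

With a small sample size, a gap can open between the two printed quantities: the training cross-entropy keeps improving while the KL to the true density stalls or degrades, which is the overfitting that early stopping guards against. This toy sketch does not reproduce the paper's asymptotics (memorization or divergence as training time goes to infinity); it only illustrates the training loop and the early-stopping diagnostic.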