Paper Title
Random Network Distillation as a Diversity Metric for Both Image and Text Generation
Paper Authors
Paper Abstract
Generative models are increasingly able to produce remarkably high quality images and text. The community has developed numerous evaluation metrics for comparing generative models. However, these metrics do not effectively quantify data diversity. We develop a new diversity metric that can readily be applied to data, both synthetic and natural, of any type. Our method employs random network distillation, a technique introduced in reinforcement learning. We validate and deploy this metric on both images and text. We further explore diversity in few-shot image generation, a setting which was previously difficult to evaluate.
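The abstract does not spell out the mechanism, so the following is only a rough illustration of the random-network-distillation idea, not the paper's actual procedure: a predictor network is trained to imitate a fixed, randomly initialized target network on the data being evaluated, and the predictor's error on a sample then signals how novel that sample is relative to what it was trained on. All dimensions, the linear predictor, and the Gaussian toy data below are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input and embedding dimensions (made up for this sketch).
D_IN, D_OUT = 16, 8

# Fixed, randomly initialized target network; it is never trained.
W_target = rng.normal(size=(D_IN, D_OUT))

def target(x):
    return np.tanh(x @ W_target)

# A deliberately weaker predictor (linear), trained to imitate the target.
W_pred = np.zeros((D_IN, D_OUT))

def predictor(x):
    return x @ W_pred

def train_predictor(data, lr=1.0, epochs=300):
    """Fit the predictor to the target's outputs on `data` by gradient descent on MSE."""
    global W_pred
    y = target(data)
    for _ in range(epochs):
        p = predictor(data)
        grad = data.T @ (p - y) / len(data)  # gradient of mean squared error
        W_pred -= lr * grad

def novelty(x):
    """Per-sample prediction error: low on inputs resembling the training data, higher elsewhere."""
    return np.mean((predictor(x) - target(x)) ** 2, axis=1)

# Stand-in for one data distribution (e.g. samples from a generator).
train_data = rng.normal(scale=0.3, size=(500, D_IN))
train_predictor(train_data)
```

After training, `novelty` stays small on fresh samples drawn like `train_data` and grows on samples from a broader distribution, which is the kind of signal a diversity metric can aggregate over a generator's outputs.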