Paper Title
Complexity Controlled Generative Adversarial Networks
Paper Authors
Paper Abstract
One of the issues faced in training Generative Adversarial Nets (GANs) and their variants is the problem of mode collapse, wherein training instability, as reflected in the generative loss, increases as more training data is used. In this paper, we propose an alternative architecture via the Low-Complexity Neural Network (LCNN), which attempts to learn models with low complexity. The motivation is that controlling model complexity leads to models that do not overfit the training data. We incorporate the LCNN loss function into GANs, Deep Convolutional GANs (DCGANs), and Spectrally Normalized GANs (SNGANs) to develop hybrid architectures called LCNN-GAN, LCNN-DCGAN, and LCNN-SNGAN, respectively. On various large benchmark image datasets, we show that our proposed models train stably and avoid the problem of mode collapse. We also show that the learning behavior can be controlled by a hyperparameter in the LCNN functional, appropriate choices of which also improve the Inception Score.
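The abstract does not define the LCNN functional itself. As a rough illustration only, the sketch below assumes the complexity term enters the training objective as an additive penalty on the generator, weighted by a hyperparameter `lam` that plays the role of the complexity-control knob the abstract mentions; the function name `lcnn_generator_loss` is hypothetical, and an L2 weight penalty stands in for the paper's (unspecified) complexity measure.

```python
import torch
import torch.nn as nn

def lcnn_generator_loss(discriminator, generator, fake_images, lam=0.1):
    """Hypothetical sketch: standard non-saturating GAN generator loss
    plus an assumed complexity penalty weighted by `lam`.

    The L2 penalty on the generator's weights below is only a stand-in
    for the LCNN complexity functional, which the abstract does not
    specify.
    """
    # Adversarial term: push D's logits on fake samples toward "real".
    logits = discriminator(fake_images)
    adv_loss = nn.functional.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))

    # Assumed complexity term: larger `lam` penalizes complex models
    # more heavily, trading generative fit for lower model complexity.
    complexity = sum(p.pow(2).sum() for p in generator.parameters())

    return adv_loss + lam * complexity
```

Under this reading, `lam = 0` recovers the ordinary GAN generator objective, while increasing `lam` biases training toward lower-complexity generators, which is the mechanism the abstract credits for stable training without mode collapse.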