Title
Entropic Fictitious Play for Mean Field Optimization Problem
Authors

Abstract
We study two-layer neural networks in the mean field limit, where the number of neurons tends to infinity. In this regime, the optimization over the neuron parameters becomes an optimization over probability measures, and by adding an entropic regularizer, the minimizer of the problem can be identified as a fixed point. We propose a novel training algorithm named entropic fictitious play, inspired by the classical fictitious play from game theory for learning Nash equilibria, to recover this fixed point; the algorithm exhibits a two-loop iteration structure. We prove exponential convergence of the algorithm and verify our theoretical results with simple numerical examples.
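As a minimal illustrative sketch of the fixed-point characterization described above (the specific notation, the functional derivative $\frac{\delta F}{\delta m}$, and the update rule below are assumptions for illustration, not taken from the abstract itself), one may consider the entropically regularized mean field problem

\[
  \min_{m \in \mathcal{P}(\mathbb{R}^d)} \; F(m) + \sigma^2 H(m),
  \qquad H(m) = \int_{\mathbb{R}^d} m(x)\,\log m(x)\,\mathrm{d}x,
\]

where $F$ is the mean field risk of the two-layer network and $\sigma^2 > 0$ is the regularization strength. Under suitable convexity and regularity assumptions, the minimizer $m^*$ is a fixed point of the Gibbs map

\[
  m^* = \Phi(m^*),
  \qquad
  \Phi(m)(\mathrm{d}x) \;\propto\; \exp\!\Bigl(-\tfrac{1}{\sigma^2}\,\tfrac{\delta F}{\delta m}(m, x)\Bigr)\,\mathrm{d}x.
\]

A two-loop scheme consistent with this picture would have an inner loop that (approximately) samples the Gibbs measure $\Phi(m_k)$ and an outer loop that averages it with the current measure, e.g. $m_{k+1} = (1-\alpha)\,m_k + \alpha\,\Phi(m_k)$ for a step size $\alpha \in (0,1]$; this is a hypothetical instance of the fictitious-play-style averaging the abstract alludes to, not necessarily the exact update analyzed in the paper.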