Paper Title
Deep Coarse-grained Potentials via Relative Entropy Minimization
Paper Authors
Paper Abstract
Neural network (NN) potentials are a natural choice for coarse-grained (CG) models. Their many-body capacity allows highly accurate approximations of the potential of mean force, promising CG simulations of unprecedented accuracy. CG NN potentials trained bottom-up via force matching (FM), however, suffer from finite-data effects: they rely on prior potentials for physically sound predictions outside the training data domain, and the corresponding free energy surface is sensitive to errors in transition regions. The standard alternative to FM for classical potentials is relative entropy (RE) minimization, which has not yet been applied to NN potentials. In this work, we demonstrate on benchmark problems of liquid water and alanine dipeptide that RE training is more data-efficient because it accesses the CG distribution during training, resulting in improved free energy surfaces and reduced sensitivity to prior potentials. In addition, RE learns to correct time integration errors, allowing larger time steps in CG molecular dynamics simulations while maintaining accuracy. Our findings therefore support the use of training objectives beyond FM as a promising direction for improving the accuracy and reliability of CG NN potentials.
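For reference, since the abstract contrasts the two training objectives without giving formulas, the following is a minimal sketch assembled from the standard coarse-graining literature (Shell's relative entropy formulation), not reproduced from this paper's text. Here M denotes the atomistic-to-CG mapping, β the inverse temperature, U_CG(·;θ) the parametrized CG potential, F_map the mapped all-atom forces, and ⟨·⟩_AA / ⟨·⟩_CG averages over the all-atom and CG ensembles:

% Force matching: regress CG forces onto mapped all-atom forces,
% using samples from the all-atom ensemble only.
\mathcal{L}_{\mathrm{FM}}(\theta) =
  \left\langle \left\| \mathbf{F}_{\mathrm{map}}(\mathbf{r})
  + \nabla_{\mathbf{R}} U_{\mathrm{CG}}\big(M(\mathbf{r});\theta\big)
  \right\|^{2} \right\rangle_{\mathrm{AA}}

% Relative entropy (Shell, 2008): KL divergence between the mapped
% all-atom distribution and the CG Boltzmann distribution.
S_{\mathrm{rel}}(\theta) =
  \beta \left\langle U_{\mathrm{CG}}\big(M(\mathbf{r});\theta\big)
  - U_{\mathrm{AA}}(\mathbf{r}) \right\rangle_{\mathrm{AA}}
  - \beta \left( A_{\mathrm{CG}} - A_{\mathrm{AA}} \right) + S_{\mathrm{map}}

% Its gradient contains an average over the CG ensemble itself:
\nabla_{\theta} S_{\mathrm{rel}} =
  \beta \left( \left\langle \nabla_{\theta} U_{\mathrm{CG}} \right\rangle_{\mathrm{AA}}
  - \left\langle \nabla_{\theta} U_{\mathrm{CG}} \right\rangle_{\mathrm{CG}} \right)

The second average in the gradient must be estimated by simulating the current CG model, which is what the abstract means by RE "accessing the CG distribution during training": each update is costlier than an FM step, but the objective sees the model's own ensemble rather than only fixed all-atom data.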