Paper Title
Principled Acceleration of Iterative Numerical Methods Using Machine Learning
Paper Authors
Paper Abstract
Iterative methods are ubiquitous in large-scale scientific computing applications, and a number of approaches based on meta-learning have recently been proposed to accelerate them. However, a systematic study of these approaches and how they differ from meta-learning is lacking. In this paper, we propose a framework to analyze such learning-based acceleration approaches, where one can immediately identify a departure from classical meta-learning. We show that this departure may lead to arbitrary deterioration of model performance. Based on our analysis, we introduce a novel training method for learning-based acceleration of iterative methods. Furthermore, we theoretically prove that the proposed method improves upon the existing methods, and demonstrate its significant advantage and versatility through various numerical applications.
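To make the setting concrete, below is a minimal, hypothetical sketch of one common form of learning-based acceleration of an iterative solver: a simple learned map `W` predicts an initial guess for a Jacobi iteration on a fixed linear system, and is trained to minimize the residual after `k` solver steps. The choice of system (a 1D Laplacian), solver, loss, and training loop are illustrative assumptions for exposition only, not the method proposed in the paper.

```python
# Illustrative sketch (NOT the paper's method): learn an initial guess so that
# k steps of Jacobi iteration on A x = b leave a small residual.
import jax
import jax.numpy as jnp

n, k = 16, 10  # problem size, number of Jacobi steps per solve

# Fixed symmetric positive-definite system: 1D discrete Laplacian.
A = 2.0 * jnp.eye(n) - jnp.eye(n, k=1) - jnp.eye(n, k=-1)
D_inv = 1.0 / jnp.diag(A)

def jacobi(x, b, steps=k):
    """Run `steps` Jacobi iterations on A x = b starting from x."""
    for _ in range(steps):
        x = x + D_inv * (b - A @ x)
    return x

def loss(W, b):
    x0 = W @ b                          # learned initial guess
    xk = jacobi(x0, b)                  # k solver iterations from that guess
    return jnp.mean((A @ xk - b) ** 2)  # squared residual after k steps

batched_loss = lambda W, B: jnp.mean(jax.vmap(lambda b: loss(W, b))(B))
grad_fn = jax.jit(jax.grad(batched_loss))

# Train the initializer on random right-hand sides from a fixed distribution.
key = jax.random.PRNGKey(0)
W = jnp.zeros((n, n))
for _ in range(200):
    key, sub = jax.random.split(key)
    B = jax.random.normal(sub, (32, n))  # batch of right-hand sides
    W = W - 1e-2 * grad_fn(W, B)         # plain gradient descent

# Compare residuals after k steps: zero initial guess vs. learned initial guess.
key, sub = jax.random.split(key)
b_test = jax.random.normal(sub, (n,))
print("residual, zero init   :", jnp.linalg.norm(A @ jacobi(jnp.zeros(n), b_test) - b_test))
print("residual, learned init:", jnp.linalg.norm(A @ jacobi(W @ b_test, b_test) - b_test))
```

The key design choice this sketch highlights is that the training objective is tied to the downstream solver (the error after a fixed number of iterations) rather than to a generic supervised target; how such objectives should be constructed, and how they relate to classical meta-learning, is the subject of the paper's analysis.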