Paper Title

Stochastic trust-region algorithm in random subspaces with convergence and expected complexity analyses

Authors

Dzahini, Kwassi Joseph, Wild, Stefan M.

Abstract

This work proposes a framework for large-scale stochastic derivative-free optimization (DFO) by introducing STARS, a trust-region method based on iterative minimization in random subspaces. This framework is both an algorithmic and theoretical extension of an algorithm for stochastic optimization with random models (STORM). Moreover, STARS achieves scalability by minimizing interpolation models that approximate the objective in low-dimensional affine subspaces, thus significantly reducing per-iteration costs in terms of function evaluations and yielding strong performance on large-scale stochastic DFO problems. The user-determined dimension of these subspaces, when the latter are defined, for example, by the columns of so-called Johnson--Lindenstrauss transforms, turns out to be independent of the dimension of the problem. For convergence purposes, both a particular quality of the subspace and the accuracies of random function estimates and models are required to hold with sufficiently high, but fixed, probabilities. Using martingale theory under the latter assumptions, an almost sure global convergence of STARS to a first-order stationary point is shown, and the expected number of iterations required to reach a desired first-order accuracy is proved to be similar to that of STORM and other stochastic DFO algorithms, up to constants.
