Paper Title
On Robustness of the Normalized Subgradient Method with Randomly Corrupted Subgradients
Paper Authors
Paper Abstract
Numerous modern optimization and machine learning algorithms rely on subgradient information being trustworthy and hence, they may fail to converge when such information is corrupted. In this paper, we consider the setting where subgradient information may be arbitrarily corrupted (with a given probability) and study the robustness properties of the normalized subgradient method. Under the probabilistic corruption scenario, we prove that the normalized subgradient method, whose updates rely solely on directional information of the subgradient, converges to a minimizer for convex, strongly convex, and weakly pseudo-convex functions satisfying certain conditions. Numerical evidence on linear regression and logistic classification problems supports our results.
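The abstract's key idea can be sketched in code: the normalized subgradient method takes steps along the unit direction of the subgradient, so a corrupted subgradient can misdirect a step but cannot scale it arbitrarily. The sketch below is illustrative only and assumes a simple corruption model (a random Gaussian vector substituted with probability `corrupt_prob`) and a diminishing step size `eta / sqrt(k)`; the paper's exact step-size schedule, corruption model, and conditions are not specified here.

```python
import numpy as np

def normalized_subgradient(grad, x0, steps=3000, eta=0.1,
                           corrupt_prob=0.0, rng=None):
    """Normalized subgradient method: each update uses only the
    direction (unit vector) of the possibly corrupted subgradient.

    Corruption model (an assumption for illustration): with probability
    `corrupt_prob`, the subgradient is replaced by an arbitrary random
    vector before normalization.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for k in range(1, steps + 1):
        g = grad(x)
        if rng.random() < corrupt_prob:
            g = rng.normal(size=x.shape)  # adversary-free stand-in corruption
        norm = np.linalg.norm(g)
        if norm == 0.0:               # at a stationary point
            break
        x = x - (eta / np.sqrt(k)) * g / norm  # unit direction, vanishing step
    return x

# Example: strongly convex quadratic f(x) = 0.5 * ||x - x_star||^2,
# whose gradient is x - x_star; 20% of subgradients are corrupted.
x_star = np.array([1.0, -2.0])
x_hat = normalized_subgradient(lambda x: x - x_star,
                               x0=[3.0, 0.0], corrupt_prob=0.2)
```

Despite the corrupted steps, the iterate drifts toward the minimizer because the uncorrupted majority of unit-length steps consistently point toward it while the step size shrinks.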