Paper Title
Automatically Learning Compact Quality-aware Surrogates for Optimization Problems
Paper Authors
Paper Abstract
Solving optimization problems with unknown parameters often requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values. Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality. Unfortunately, this process comes at a large computational cost because the optimization problem must be solved and differentiated through in each training iteration; furthermore, it may also sometimes fail to improve solution quality due to non-smoothness issues that arise when training through a complex optimization layer. To address these shortcomings, we learn a low-dimensional surrogate model of a large optimization problem by representing the feasible space in terms of meta-variables, each of which is a linear combination of the original variables. By training a low-dimensional surrogate model end-to-end, and jointly with the predictive model, we achieve: i) a large reduction in training and inference time; and ii) improved performance by focusing attention on the more important variables in the optimization and learning in a smoother space. Empirically, we demonstrate these improvements on a non-convex adversary modeling task, a submodular recommendation task and a convex portfolio optimization task.
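The core reparameterization idea — expressing the high-dimensional decision variable as a linear combination of a few meta-variables, so that optimization happens in the low-dimensional surrogate space — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a toy simplex-constrained portfolio problem, a fixed random reparameterization matrix `P` (in the paper, `P` is learned end-to-end jointly with the predictive model), and plain projected gradient ascent as the solver.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 3  # original and surrogate dimensions (illustrative)

# Each column of P is a point in the n-simplex, so x = P @ y is a convex
# combination of those points and stays feasible whenever y is in the
# k-simplex. In the paper, P is a trainable layer; here it is fixed.
P = rng.dirichlet(np.ones(n), size=k).T  # shape (n, k)

mu = rng.normal(size=n)   # stand-in for predicted returns
lam = 1.0                 # risk-aversion weight (assumed)
Sigma = 0.1 * np.eye(n)   # toy covariance matrix

def project_simplex(v):
    """Euclidean projection onto the probability simplex (standard algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

# Projected gradient ascent on f(x) = mu @ x - lam * x @ Sigma @ x,
# but over only the k surrogate variables y instead of all n variables.
y = np.ones(k) / k
for _ in range(200):
    x = P @ y
    grad_x = mu - 2 * lam * (Sigma @ x)
    grad_y = P.T @ grad_x            # chain rule through x = P y
    y = project_simplex(y + 0.05 * grad_y)

x = P @ y  # recovered full-dimensional decision, feasible by construction
```

Because `y` lives in a 3-dimensional simplex rather than a 50-dimensional one, each solver iteration (and, in the end-to-end setting, each differentiation through the solver) is correspondingly cheaper, which is the source of the training- and inference-time reductions the abstract claims.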