Paper Title
Optimal Robustness-Consistency Trade-offs for Learning-Augmented Online Algorithms
Paper Authors
Paper Abstract
We study the problem of improving the performance of online algorithms by incorporating machine-learned predictions. The goal is to design algorithms that are both consistent and robust, meaning that the algorithm performs well when predictions are accurate, while also maintaining worst-case guarantees. Such algorithms have been studied in a recent line of work initiated by Lykouris and Vassilvitskii (ICML '18) and Purohit et al. (NeurIPS '18), who provide robustness-consistency trade-offs for a variety of online problems. However, they leave open the question of whether these trade-offs are tight, i.e., to what extent such trade-offs are necessary. In this paper, we provide the first set of non-trivial lower bounds for competitive analysis using machine-learned predictions. We focus on the classic problems of ski rental and non-clairvoyant scheduling and provide optimal trade-offs in various settings.
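For intuition about the kind of trade-off the abstract refers to, the following is a minimal Python sketch of the deterministic prediction-augmented ski-rental strategy of Purohit et al. (NeurIPS '18), used here only as a running example; the function and variable names are illustrative and not taken from the paper. With a trade-off parameter lam in (0, 1], the strategy is (1 + lam)-consistent (competitive when the prediction is accurate) and (1 + 1/lam)-robust (competitive in the worst case).

import math

def buy_day(b, y, lam):
    """Day on which to buy skis (rent every day before that).

    If the prediction y is at least the purchase cost b, trust it and buy
    early, on day ceil(lam * b); otherwise buy late, on day ceil(b / lam).
    For lam in (0, 1], this strategy is (1 + lam)-consistent and
    (1 + 1/lam)-robust.
    """
    return math.ceil(lam * b) if y >= b else math.ceil(b / lam)

def cost(b, true_days, day_to_buy):
    """Total cost paid: rent at 1 per day, then buy (cost b) on day_to_buy."""
    if true_days < day_to_buy:
        return true_days              # season ends before we ever buy
    return (day_to_buy - 1) + b       # rent for day_to_buy - 1 days, then buy

# Accurate prediction: b = 100, y = 150, and the season really lasts 150 days.
# Cost is 149 vs. OPT = 100, within the (1 + lam) = 1.5 consistency bound.
print(cost(100, 150, buy_day(100, y=150, lam=0.5)))

Whether trade-offs of this form can be improved is exactly the lower-bound question the paper addresses.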