Paper Title
POSET-RL: Phase ordering for Optimizing Size and Execution Time using Reinforcement Learning
Paper Authors
Paper Abstract
The ever-increasing memory requirements of several applications have led to increased demands that embedded devices might not be able to meet. Constraining memory usage in such cases is of paramount importance. It is important that such code size improvements do not have a negative impact on the runtime. Improving the execution time while optimizing for code size is a non-trivial but significant task. The ordering of standard optimization sequences in modern compilers is fixed, and is heuristically created by compiler domain experts based on their expertise. However, this ordering is sub-optimal and does not generalize well across all cases. We present a reinforcement learning based solution to the phase ordering problem, where the ordering improves both the execution time and the code size. We propose two different approaches to model the sequences: one based on manual ordering, and the other based on a graph called the Oz Dependence Graph (ODG). Our approach uses minimal data as the training set and is integrated with LLVM. We show results on the x86 and AArch64 architectures on benchmarks from SPEC-CPU 2006, SPEC-CPU 2017 and MiBench. We observe that the proposed ODG-based model outperforms the current Oz sequence in terms of both size and execution time, by 6.19% and 11.99% respectively on average, on the SPEC 2017 benchmarks.
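To make the problem framing concrete, below is a minimal sketch (not the paper's implementation) of how phase ordering can be cast as a reinforcement learning environment over LLVM: the agent repeatedly picks a sub-sequence of optimization passes to apply to an IR module, and the reward reflects the resulting code-size reduction (a full setup would also fold in measured execution time). The pass sub-sequences, file handling, observation, and reward weighting here are illustrative assumptions, not POSET-RL's actual action space, state representation, or reward.

```python
# Toy phase-ordering RL environment over a single LLVM bitcode module.
# Pass names and the `-passes=` flag syntax assume the new pass manager;
# they may need adjusting for the LLVM version in use.
import os
import shutil
import subprocess
import tempfile
from pathlib import Path

# Hypothetical action space: each action applies a small group of passes.
SUB_SEQUENCES = [
    "simplifycfg,sroa",
    "instcombine,gvn",
    "licm,loop-unroll",
]


class PhaseOrderingEnv:
    """Gym-style environment: actions are pass sub-sequences, reward is size gain."""

    def __init__(self, module: str, max_steps: int = 10):
        self.original = Path(module)
        fd, path = tempfile.mkstemp(suffix=".bc")
        os.close(fd)
        self.work = Path(path)          # working copy that passes are applied to
        self.max_steps = max_steps

    def reset(self):
        shutil.copyfile(self.original, self.work)
        self.steps = 0
        self.prev_size = self.work.stat().st_size  # crude code-size proxy
        return self._observe()

    def _observe(self):
        # Placeholder observation; a real agent would use a richer program
        # representation (e.g. learned IR embeddings) as its state.
        return [float(self.prev_size), float(self.steps)]

    def step(self, action: int):
        passes = SUB_SEQUENCES[action]
        out = self.work.with_suffix(".next.bc")
        # Apply the chosen sub-sequence with LLVM's `opt`.
        subprocess.run(
            ["opt", f"-passes={passes}", str(self.work), "-o", str(out)],
            check=True,
        )
        out.replace(self.work)
        new_size = self.work.stat().st_size
        # Reward: relative size reduction this step; execution-time improvement
        # would be added as a second, weighted term in a size+runtime objective.
        reward = (self.prev_size - new_size) / max(self.prev_size, 1)
        self.prev_size = new_size
        self.steps += 1
        done = self.steps >= self.max_steps
        return self._observe(), reward, done
```

In this framing, learning a good phase ordering amounts to learning a policy over such actions; the episode's accumulated pass choices form the discovered optimization sequence that can then be compared against the default Oz pipeline.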