Paper Title

Hybridizing the 1/5-th Success Rule with Q-Learning for Controlling the Mutation Rate of an Evolutionary Algorithm

Authors

Arina Buzdalova, Carola Doerr, Anna Rodionova

Abstract

It is well known that evolutionary algorithms (EAs) achieve peak performance only when their parameters are suitably tuned to the given problem. Even more, it is known that the best parameter values can change during the optimization process. Parameter control mechanisms are techniques developed to identify and to track these values. Recently, a series of rigorous theoretical works confirmed the superiority of several parameter control techniques over EAs with best possible static parameters. Among these results are examples for controlling the mutation rate of the $(1+λ)$~EA when optimizing the OneMax problem. However, it was shown in [Rodionova et al., GECCO'19] that the quality of these techniques strongly depends on the offspring population size $λ$. We introduce in this work a new hybrid parameter control technique, which combines the well-known one-fifth success rule with Q-learning. We demonstrate that our HQL mechanism achieves equal or superior performance to all techniques tested in [Rodionova et al., GECCO'19] and this -- in contrast to previous parameter control methods -- simultaneously for all offspring population sizes $λ$. We also show that the promising performance of HQL is not restricted to OneMax, but extends to several other benchmark problems.
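The HQL mechanism itself is not described in the abstract, but the classic one-fifth success rule it builds on can be sketched for a (1+λ) EA on OneMax. The following is an illustrative toy implementation, not the paper's method; the update factor `F`, the rate bounds, and the evaluation budget are assumed values chosen for the sketch.

```python
import random

def one_max(x):
    """OneMax fitness: the number of one-bits in the string."""
    return sum(x)

def one_plus_lambda_ea(n=50, lam=8, seed=1, max_evals=50000):
    """(1+lambda) EA on OneMax whose mutation rate is adapted by a
    one-fifth success rule (illustrative sketch, not the paper's HQL)."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    fitness = one_max(parent)
    rate = 1.0 / n        # standard initial mutation rate 1/n
    F = 2.0               # update strength (assumed value)
    evals = 0
    while fitness < n and evals < max_evals:
        # Create lambda offspring by standard bit mutation.
        best_child, best_fit = None, -1
        for _ in range(lam):
            child = [1 - b if rng.random() < rate else b for b in parent]
            f = one_max(child)
            evals += 1
            if f > best_fit:
                best_child, best_fit = child, f
        if best_fit > fitness:
            # Success: accept the improvement and increase the rate.
            parent, fitness = best_child, best_fit
            rate = min(0.25, rate * F)
        else:
            # Failure: decrease the rate by F^(1/4), so that a success
            # frequency of ~1/5 keeps the rate stable on average.
            rate = max(1.0 / n, rate / F ** 0.25)
    return fitness, evals
```

Increasing the rate by `F` on success and shrinking it by `F**0.25` on failure is the usual discrete rendering of the one-fifth rule: the rate drifts neither up nor down exactly when one generation in five succeeds.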
