Paper Title

Provably Efficient Online Hyperparameter Optimization with Population-Based Bandits

Paper Authors

Jack Parker-Holder, Vu Nguyen, Stephen Roberts

Paper Abstract

Many of the recent triumphs in machine learning are dependent on well-tuned hyperparameters. This is particularly prominent in reinforcement learning (RL) where a small change in the configuration can lead to failure. Despite the importance of tuning hyperparameters, it remains expensive and is often done in a naive and laborious way. A recent solution to this problem is Population Based Training (PBT) which updates both weights and hyperparameters in a single training run of a population of agents. PBT has been shown to be particularly effective in RL, leading to widespread use in the field. However, PBT lacks theoretical guarantees since it relies on random heuristics to explore the hyperparameter space. This inefficiency means it typically requires vast computational resources, which is prohibitive for many small and medium sized labs. In this work, we introduce the first provably efficient PBT-style algorithm, Population-Based Bandits (PB2). PB2 uses a probabilistic model to guide the search in an efficient way, making it possible to discover high performing hyperparameter configurations with far fewer agents than typically required by PBT. We show in a series of RL experiments that PB2 is able to achieve high performance with a modest computational budget.
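To make the high-level description concrete, below is a minimal sketch (not the authors' implementation) of a PBT-style training loop in which the random explore step is replaced by a probabilistic, model-guided suggestion, in the spirit of PB2. The helpers `train_step` and `evaluate` are hypothetical placeholders for the user's RL training and evaluation code, and the simple UCB-over-random-candidates acquisition with a scikit-learn Gaussian process stands in for PB2's time-varying GP-bandit acquisition.

```python
# Sketch of a population-based loop with a model-guided explore step.
# Assumptions: `train_step(weights, hparams)` trains one member for one
# interval and returns new weights; `evaluate(weights)` returns a scalar
# score; `bounds` has shape (n_hparams, 2). Not the authors' code.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor


def pb2_style_sketch(train_step, evaluate, init_hparams, bounds,
                     n_intervals=20, quantile=0.25, ucb_kappa=2.0, seed=0):
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    pop = [{"weights": None, "hparams": np.asarray(h, dtype=float)}
           for h in init_hparams]
    obs_x, obs_y = [], []  # (hyperparameters -> score) data for the surrogate

    for _ in range(n_intervals):
        # 1) Train and evaluate every population member for one interval.
        scores = []
        for m in pop:
            m["weights"] = train_step(m["weights"], m["hparams"])
            s = evaluate(m["weights"])
            scores.append(s)
            obs_x.append(m["hparams"].copy())
            obs_y.append(s)

        # 2) Fit a probabilistic surrogate on all observations so far.
        gp = GaussianProcessRegressor(normalize_y=True)
        gp.fit(np.asarray(obs_x), np.asarray(obs_y))

        # 3) Exploit/explore: the bottom quantile copies weights from a top
        #    performer, then receives new hyperparameters suggested by the
        #    surrogate (instead of PBT's random perturbation).
        order = np.argsort(scores)
        n_cut = max(1, int(len(pop) * quantile))
        bottom, top = order[:n_cut], order[-n_cut:]
        for b in bottom:
            parent = pop[int(rng.choice(top))]
            pop[int(b)]["weights"] = parent["weights"]  # real code would deep-copy
            cands = rng.uniform(bounds[:, 0], bounds[:, 1],
                                size=(256, bounds.shape[0]))
            mu, sigma = gp.predict(cands, return_std=True)
            pop[int(b)]["hparams"] = cands[int(np.argmax(mu + ucb_kappa * sigma))]

    best = int(np.argmax(scores))
    return pop[best]["weights"], pop[best]["hparams"]
```

Note that the exploit step (copying weights from a top performer) is unchanged from PBT; only the choice of new hyperparameters is guided by the probabilistic model, which is where the paper's claimed efficiency gains come from.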
