Paper Title
Online 3D Bin Packing Reinforcement Learning Solution with Buffer
Authors
Abstract
The 3D Bin Packing Problem (3D-BPP) is one of the most demanded yet challenging problems in industry: an agent must pack variable-size items, delivered in sequence, into a finite bin with the aim of maximizing space utilization. It is a strongly NP-hard optimization problem, and no solution offered to date achieves high space utilization. In this paper, we present a new reinforcement learning (RL) framework for the 3D-BPP that improves packing performance. First, a buffer is introduced to allow multi-item action selection. By increasing the degree of freedom in action selection, a more complex policy can be derived, resulting in better packing performance. Second, we propose an agnostic data augmentation strategy that exploits both bin and item symmetries to improve sample efficiency. Third, we implement a model-based RL method adapted from AlphaGo, a popular algorithm that has shown superhuman performance in zero-sum games. Our adaptation works in single-player, score-based environments. Although AlphaGo variants are known to be computationally heavy, we manage to train the proposed framework with a single thread and GPU while obtaining a solution that outperforms state-of-the-art results in space utilization.
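To illustrate the symmetry-based data augmentation mentioned in the abstract, the following is a minimal sketch under assumed representations that the abstract does not specify: the bin state as a 2D height map (a NumPy array of stacked heights per floor cell) and an item as a `(width, depth, height)` triple. A rectangular bin admits flips along both floor axes; a square bin additionally admits a 90-degree rotation, which transposes the height map and swaps the item's footprint dimensions.

```python
import numpy as np

def augment_samples(heightmap, item):
    """Generate symmetry-equivalent (heightmap, item) training samples.

    Hypothetical sketch: `heightmap` is a 2D array of per-cell stack
    heights, `item` is a (width, depth, height) triple. Flips along the
    two floor axes give 4 variants; if the bin floor is square, each
    variant also has a transposed (90-degree rotated) counterpart with
    the item's footprint dimensions swapped, for 8 variants in total.
    """
    w, d, h = item
    samples = []
    for flip_x in (False, True):
        for flip_y in (False, True):
            hm = heightmap
            if flip_x:
                hm = np.flip(hm, axis=0)  # mirror along the first floor axis
            if flip_y:
                hm = np.flip(hm, axis=1)  # mirror along the second floor axis
            samples.append((hm.copy(), (w, d, h)))
            if heightmap.shape[0] == heightmap.shape[1]:
                # Square bins also allow a 90-degree rotation: transpose
                # the height map and swap the item's footprint dims.
                samples.append((hm.T.copy(), (d, w, h)))
    return samples
```

Feeding all returned samples to the learner exposes it to every symmetry-equivalent view of each observed transition, which is one plausible way to realize the sample-efficiency gain the abstract claims.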