Paper Title
PROFET: Profiling-based CNN Training Latency Prophet for GPU Cloud Instances
Paper Authors
Paper Abstract
Training a Convolutional Neural Network (CNN) model typically requires significant computing power, and cloud computing resources are widely used as training environments. However, it is difficult for CNN algorithm developers to keep up with system updates and apply them to their training environments because cloud services evolve quickly. It is therefore important for cloud computing service vendors to design and deliver an optimal training environment for various training tasks, lessening the system operation management overhead for algorithm developers. To achieve this goal, we propose PROFET, which can predict the training latency of an arbitrary CNN implementation on various Graphics Processing Unit (GPU) devices in order to develop a cost-effective and time-efficient cloud training environment. Unlike previous work on training latency prediction, PROFET does not rely on the implementation details of the CNN architecture, which makes it suitable for use in a public cloud environment. Thorough evaluations reveal PROFET's superior prediction accuracy compared to state-of-the-art related work, and a demonstration service presents the practicality of the proposed system.
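The abstract does not describe PROFET's actual profiling or prediction pipeline, but a minimal sketch of what "training latency" measurement on a GPU might look like can make the target quantity concrete. The following PyTorch snippet is an illustrative assumption only (model choice, batch size, and warm-up/iteration counts are hypothetical), not PROFET's method:

# Minimal sketch (not PROFET's method): measure mean per-iteration CNN
# training latency on a GPU with PyTorch. All hyperparameters here are
# illustrative assumptions.
import time
import torch
import torchvision

def measure_training_latency(model, batch_size=32, warmup=5, iters=20,
                             device="cuda"):
    """Return the mean per-iteration training latency in milliseconds."""
    model = model.to(device).train()
    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # Synthetic inputs so the measurement does not depend on a data pipeline.
    inputs = torch.randn(batch_size, 3, 224, 224, device=device)
    labels = torch.randint(0, 1000, (batch_size,), device=device)

    def step():
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()

    for _ in range(warmup):       # warm up kernels and the caching allocator
        step()
    torch.cuda.synchronize()

    start = time.perf_counter()
    for _ in range(iters):
        step()
    torch.cuda.synchronize()      # wait for queued GPU work before timing ends
    return (time.perf_counter() - start) / iters * 1000.0

if __name__ == "__main__":
    latency_ms = measure_training_latency(torchvision.models.resnet50())
    print(f"mean per-iteration training latency: {latency_ms:.1f} ms")

Repeating such measurements across GPU instance types is expensive, which is the motivation for a predictor like PROFET that estimates these latencies without running the workload on every device.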