Paper title
Algorithmic progress in computer vision
Paper authors
Paper abstract
We investigate algorithmic progress in image classification on ImageNet, perhaps the most well-known test bed for computer vision. We estimate a model, informed by work on neural scaling laws, and infer a decomposition of progress into the scaling of compute, data, and algorithms. Using Shapley values to attribute performance improvements, we find that algorithmic improvements have been roughly as important as the scaling of compute for progress in computer vision. Our estimates indicate that algorithmic innovations mostly take the form of compute-augmenting algorithmic advances (which enable researchers to get better performance from less compute), not data-augmenting algorithmic advances. We find that compute-augmenting algorithmic advances are made at a pace more than twice as fast as the rate usually associated with Moore's law. In particular, we estimate that compute-augmenting innovations halve compute requirements every nine months (95\% confidence interval: 4 to 25 months).
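To illustrate the Shapley-value attribution the abstract refers to, the following is a minimal sketch for three factors (compute, data, algorithms). The `VALUE` function and its numbers are purely hypothetical placeholders, not estimates from the paper; the code simply shows how marginal contributions averaged over all orderings yield each factor's Shapley value.

```python
from itertools import permutations

# Hypothetical performance gain (arbitrary units) achieved by each subset of
# factors. These numbers are illustrative only, not results from the paper.
VALUE = {
    frozenset(): 0.0,
    frozenset({"compute"}): 4.0,
    frozenset({"data"}): 2.0,
    frozenset({"algorithms"}): 3.0,
    frozenset({"compute", "data"}): 6.0,
    frozenset({"compute", "algorithms"}): 8.0,
    frozenset({"data", "algorithms"}): 5.0,
    frozenset({"compute", "data", "algorithms"}): 10.0,
}

def shapley_values(players, value):
    """Exact Shapley values: average each player's marginal contribution
    over every ordering in which the players could be added."""
    orders = list(permutations(players))
    shap = {p: 0.0 for p in players}
    for order in orders:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            shap[p] += value[with_p] - value[coalition]
            coalition = with_p
    return {p: s / len(orders) for p, s in shap.items()}

attrib = shapley_values(["compute", "data", "algorithms"], VALUE)
print(attrib)  # {'compute': 4.5, 'data': 2.0, 'algorithms': 3.5}
```

By construction the attributions sum exactly to the value of the full coalition (10.0 here), which is the efficiency property that makes Shapley values a natural way to split total progress among compute, data, and algorithms.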