Paper Title

Better plain ViT baselines for ImageNet-1k

Authors

Lucas Beyer, Xiaohua Zhai, Alexander Kolesnikov

Abstract

It is commonly accepted that the Vision Transformer model requires sophisticated regularization techniques to excel at ImageNet-1k scale data. Surprisingly, we find this is not the case and standard data augmentation is sufficient. This note presents a few minor modifications to the original Vision Transformer (ViT) vanilla training setting that dramatically improve the performance of plain ViT models. Notably, 90 epochs of training surpass 76% top-1 accuracy in under seven hours on a TPUv3-8, similar to the classic ResNet50 baseline, and 300 epochs of training reach 80% in less than one day.
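
The "standard data augmentation" the note relies on is light, off-the-shelf augmentation rather than heavy regularization. Below is a minimal sketch of such a pipeline in PyTorch/torchvision; the specific RandAugment and Mixup strengths are illustrative assumptions, not the authors' exact big_vision/JAX configuration.

```python
import torch
from torchvision import transforms

# Standard ImageNet-1k training augmentation: random resized crop,
# horizontal flip, and RandAugment (strengths below are assumptions).
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.RandAugment(num_ops=2, magnitude=10),  # assumed setting
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def mixup(images, labels, alpha=0.2, num_classes=1000):
    """Mixup: convex combination of a batch with a shuffled copy of itself.
    alpha=0.2 is an assumed (commonly used) mixing strength."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    perm = torch.randperm(images.size(0))
    mixed_images = lam * images + (1.0 - lam) * images[perm]
    one_hot = torch.nn.functional.one_hot(labels, num_classes).float()
    mixed_labels = lam * one_hot + (1.0 - lam) * one_hot[perm]
    return mixed_images, mixed_labels
```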
