Paper Title
Convolutional Bypasses Are Better Vision Transformer Adapters
Paper Authors
Paper Abstract
The pretrain-then-finetune paradigm has been widely adopted in computer vision. But as the size of Vision Transformers (ViTs) grows exponentially, full finetuning becomes prohibitive given the heavy storage overhead. Motivated by parameter-efficient transfer learning (PETL) on language transformers, recent studies attempt to insert lightweight adaptation modules (e.g., adapter layers or prompt tokens) into a pretrained ViT and finetune only these modules while the pretrained weights are frozen. However, these modules were originally proposed to finetune language models and do not account for the prior knowledge specific to visual tasks. In this paper, we propose to construct Convolutional Bypasses (Convpass) in ViT as adaptation modules, introducing only a small number of trainable parameters (less than 0.5% of the model parameters) to adapt the large ViT. Unlike other PETL methods, Convpass benefits from the hard-coded inductive bias of convolutional layers and is thus better suited to visual tasks, especially in the low-data regime. Experimental results on the VTAB-1K benchmark and few-shot learning datasets show that Convpass outperforms current language-oriented adaptation modules, demonstrating the necessity of tailoring vision-oriented adaptation modules for adapting vision models.
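The abstract does not spell out the module design, so the following PyTorch code is a minimal sketch of what a convolutional bypass could look like: a linear down-projection, a 3x3 convolution applied to the tokens reshaped into their 2D spatial grid, and a linear up-projection, attached in parallel to a frozen ViT block. The class name, bottleneck width, activation choice, and the omission of the [CLS] token are illustrative assumptions, not the paper's exact specification.

import torch
import torch.nn as nn

class Convpass(nn.Module):
    """Hypothetical convolutional bypass adaptation module (sketch).

    Bottleneck design: project tokens to a low dimension, run a 3x3
    convolution over the spatial token grid (the hard-coded inductive
    bias), then project back. Details are assumptions based on the
    abstract, not the paper's exact architecture.
    """
    def __init__(self, dim=768, bottleneck=8):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)                        # down-projection
        self.conv = nn.Conv2d(bottleneck, bottleneck, 3, padding=1)   # 3x3 spatial conv
        self.up = nn.Linear(bottleneck, dim)                          # up-projection
        self.act = nn.GELU()

    def forward(self, x):
        # x: (B, N, D) patch tokens with N = H*W ([CLS] token omitted for simplicity)
        B, N, D = x.shape
        side = int(N ** 0.5)
        h = self.act(self.down(x))                                    # (B, N, b)
        h = h.transpose(1, 2).reshape(B, -1, side, side)              # (B, b, H, W)
        h = self.act(self.conv(h))                                    # conv on token grid
        h = h.reshape(B, -1, N).transpose(1, 2)                       # (B, N, b)
        return self.up(h)                                             # (B, N, D)

# Hypothetical usage: the bypass runs in parallel to a frozen pretrained block,
# so only the bypass parameters receive gradients during finetuning.
block = nn.Identity()            # stand-in for a frozen, pretrained ViT block
bypass = Convpass(dim=768, bottleneck=8)
x = torch.randn(2, 196, 768)     # batch of 14x14 patch-token sequences
out = block(x) + bypass(x)

With a bottleneck of 8, a module like this adds on the order of ten thousand parameters per block, consistent with the abstract's claim of under 0.5% of the parameters of a ViT-scale model.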