Paper Title


On The Cross-Modal Transfer from Natural Language to Code through Adapter Modules

Paper Authors

Divyam Goel, Ramansh Grover, Fatemeh H. Fard

Paper Abstract


Pre-trained neural Language Models (PTLMs), such as CodeBERT, have recently been used in software engineering as models pre-trained on large source code corpora. Their knowledge is transferred to downstream tasks (e.g., code clone detection) via fine-tuning. In natural language processing (NLP), other alternatives for transferring the knowledge of PTLMs have been explored through adapters: compact, parameter-efficient modules inserted into the layers of the PTLM. Although adapters are known to facilitate adapting to many downstream tasks compared to fine-tuning, which requires retraining all of the model's parameters -- owing to the adapters' plug-and-play nature and parameter efficiency -- their usage in software engineering has not been explored. Here, we explore knowledge transfer using adapters, based on the Naturalness Hypothesis proposed by Hindle et al. \cite{hindle2016naturalness}. We study the bimodality of adapters on two tasks, cloze test and code clone detection, compared against their benchmarks from the CodeXGLUE platform. These adapters are trained on programming languages and are inserted into a PTLM that was pre-trained on English corpora (N-PTLM). Three programming languages, C/C++, Python, and Java, are studied, along with extensive experiments on the best setup for adapters. The improved results of the N-PTLM confirm the success of adapters in transferring knowledge to software engineering, sometimes matching or exceeding the results of a PTLM trained on source code, while being more efficient in terms of the number of parameters, memory usage, and inference time. Our results can open new directions for building smaller models for more software engineering tasks. We open-source all the scripts and the trained adapters.
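To make the setup described in the abstract concrete, the sketch below shows how adapter modules can be inserted into an English-pre-trained model (the N-PTLM) so that only the adapter weights remain trainable on source code. This is a minimal illustration, assuming RoBERTa as the N-PTLM and the AdapterHub "adapters" library; the checkpoint, adapter name, and library choice are illustrative assumptions, not the authors' exact configuration.

    # Minimal sketch (assumed setup): add a bottleneck adapter to an
    # English-pre-trained RoBERTa and freeze the base model, so only the
    # adapter parameters would be trained on source code.
    from transformers import AutoModelForMaskedLM
    import adapters  # AdapterHub library; library choice is an assumption

    model = AutoModelForMaskedLM.from_pretrained("roberta-base")  # N-PTLM (English corpora)
    adapters.init(model)                       # enable adapter support on the HF model

    model.add_adapter("java_adapter")          # compact modules inserted into every layer
    model.train_adapter("java_adapter")        # freeze base weights; train only the adapter
    model.set_active_adapters("java_adapter")

    # Parameter efficiency: only a small fraction of the weights stay trainable.
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"trainable (adapter) parameters: {trainable:,} of {total:,}")

After training, such an adapter can be activated or swapped per task or per programming language without modifying the frozen base model, which is the plug-and-play property the abstract refers to.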
