Paper Title
On the Importance of Building High-quality Training Datasets for Neural Code Search
Paper Authors
Paper Abstract
The performance of neural code search is significantly influenced by the quality of the training data from which the neural models are derived. A large corpus of high-quality query and code pairs is required to establish a precise mapping from natural language to programming language. Because such data are of limited availability, most widely used code search datasets are built with compromises, such as using code comments as substitutes for queries. Our empirical study on a well-known code search dataset reveals that over one third of its queries contain noise that makes them deviate from natural user queries. Models trained on noisy data suffer severe performance degradation when applied in real-world scenarios. Improving dataset quality and making the queries of its samples semantically identical to real user queries is therefore critical for the practical usability of neural code search. In this paper, we propose a data cleaning framework consisting of two successive filters: a rule-based syntactic filter and a model-based semantic filter. This is the first framework that applies semantic query cleaning to code search datasets. Experimentally, we evaluate the effectiveness of our framework on two widely used code search models and three manually annotated code retrieval benchmarks. Training the popular DeepCS model with the dataset filtered by our framework improves its performance by 19.2% in MRR and 21.3% in Answer@1, on average over the three validation benchmarks.
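
To make the two-stage design concrete, the following is a minimal Python sketch of such a cleaning pipeline: a rule-based syntactic filter followed by a model-based semantic filter applied to (query, code) training pairs. It is not the authors' released implementation; the rule patterns, the placeholder scoring function, and the 0.5 threshold are all illustrative assumptions.

import re

def syntactic_filter(query: str) -> bool:
    """Rule-based check (illustrative rules): drop comment-derived queries that
    contain obvious non-query artifacts such as URLs, HTML tags, or developer
    annotations."""
    rules = [
        r"https?://\S+",        # embedded links
        r"<[^>]+>",             # HTML tags left over from doc comments
        r"\bTODO\b|\bFIXME\b",  # developer annotations, not user intent
    ]
    return not any(re.search(p, query) for p in rules)

def semantic_filter(query: str, score_fn, threshold: float = 0.5) -> bool:
    """Model-based check: keep a query only if a learned scorer judges it
    semantically close to a natural user query. `score_fn` stands in for
    whatever model the framework uses; the threshold is a placeholder."""
    return score_fn(query) >= threshold

def clean_dataset(pairs, score_fn):
    """Apply the two filters in sequence and keep only the surviving pairs."""
    return [(q, c) for q, c in pairs
            if syntactic_filter(q) and semantic_filter(q, score_fn)]

if __name__ == "__main__":
    # Toy usage with a dummy scorer that favors short, natural-looking queries.
    pairs = [
        ("convert a string to lowercase", "def lower(s): return s.lower()"),
        ("TODO see <a href='x'>docs</a>", "def f(x): pass"),
    ]
    dummy_score = lambda q: 1.0 if len(q.split()) <= 8 else 0.3
    print(clean_dataset(pairs, dummy_score))

For reference on the reported metrics: MRR is the mean reciprocal rank of the first correct result over all queries, and Answer@1 is the fraction of queries whose top-ranked result is correct.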