Paper Title

Large Language Models are Few-shot Testers: Exploring LLM-based General Bug Reproduction

Authors

Sungmin Kang, Juyeon Yoon, Shin Yoo

Abstract

Many automated test generation techniques have been developed to aid developers with writing tests. To facilitate full automation, most existing techniques aim to either increase coverage, or generate exploratory inputs. However, existing test generation techniques largely fall short of achieving more semantic objectives, such as generating tests to reproduce a given bug report. Reproducing bugs is nonetheless important, as our empirical study shows that the number of tests added in open source repositories due to issues was about 28% of the corresponding project test suite size. Meanwhile, due to the difficulties of transforming the expected program semantics in bug reports into test oracles, existing failure reproduction techniques tend to deal exclusively with program crashes, a small subset of all bug reports. To automate test generation from general bug reports, we propose LIBRO, a framework that uses Large Language Models (LLMs), which have been shown to be capable of performing code-related tasks. Since LLMs themselves cannot execute the target buggy code, we focus on post-processing steps that help us discern when LLMs are effective, and rank the produced tests according to their validity. Our evaluation of LIBRO shows that, on the widely studied Defects4J benchmark, LIBRO can generate failure reproducing test cases for 33% of all studied cases (251 out of 750), while suggesting a bug reproducing test in first place for 149 bugs. To mitigate data contamination, we also evaluate LIBRO against 31 bug reports submitted after the collection of the LLM training data terminated: LIBRO produces bug reproducing tests for 32% of the studied bug reports. Overall, our results show LIBRO has the potential to significantly enhance developer efficiency by automatically generating tests from bug reports.
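The workflow the abstract outlines (prompt an LLM with a bug report, post-process the generated test, execute it against the buggy program, and keep or rank candidates by whether they actually fail) can be sketched roughly as follows. This is a minimal sketch under assumed interfaces: query_llm, inject_test, and test_fails are hypothetical helpers, and the Maven invocation merely stands in for whatever build system the target project uses; this is not LIBRO's actual implementation or API.

```python
# Minimal, hypothetical sketch of the bug-report-to-test pipeline described
# in the abstract. All names below (query_llm, inject_test, test_fails) are
# illustrative placeholders, not LIBRO's actual API.

import subprocess
from dataclasses import dataclass


@dataclass
class BugReport:
    title: str
    body: str


PROMPT_TEMPLATE = (
    "Bug report: {title}\n"
    "{body}\n\n"
    "Write a JUnit test method that reproduces the bug described above.\n"
)


def query_llm(prompt: str, n_samples: int = 5) -> list[str]:
    """Placeholder for an LLM completion call that returns several
    candidate JUnit test methods for the same prompt."""
    raise NotImplementedError


def inject_test(candidate: str, test_class_path: str) -> None:
    """Insert the candidate test into an existing test class so that the
    necessary imports and fixtures are available (heavily simplified)."""
    ...


def test_fails(project_dir: str) -> bool:
    """Run the project's test suite and report whether it fails; a failing
    injected test on the buggy version signals bug reproduction."""
    result = subprocess.run(["mvn", "test"], cwd=project_dir, capture_output=True)
    return result.returncode != 0


def reproduce(report: BugReport, project_dir: str, test_class_path: str) -> list[str]:
    """Collect candidate tests that fail on the buggy program; the paper
    additionally ranks candidates with post-processing heuristics."""
    prompt = PROMPT_TEMPLATE.format(title=report.title, body=report.body)
    reproducing = []
    for candidate in query_llm(prompt):
        inject_test(candidate, test_class_path)
        if test_fails(project_dir):
            reproducing.append(candidate)
    return reproducing
```

Keeping validation as a separate post-processing step matters because, as the abstract notes, the LLM itself cannot execute the target buggy code; only the test harness can tell which generated tests genuinely reproduce the report.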
