Paper Title

Investigating Explainability of Generative AI for Code through Scenario-based Design

Authors

Jiao Sun, Q. Vera Liao, Michael Muller, Mayank Agarwal, Stephanie Houde, Kartik Talamadupula, Justin D. Weisz

Abstract

What does it mean for a generative AI model to be explainable? The emergent discipline of explainable AI (XAI) has made great strides in helping people understand discriminative models. Less attention has been paid to generative models that produce artifacts, rather than decisions, as output. Meanwhile, generative AI (GenAI) technologies are maturing and being applied to application domains such as software engineering. Using scenario-based design and question-driven XAI design approaches, we explore users' explainability needs for GenAI in three software engineering use cases: natural language to code, code translation, and code auto-completion. We conducted 9 workshops with 43 software engineers in which real examples from state-of-the-art generative AI models were used to elicit users' explainability needs. Drawing from prior work, we also propose 4 types of XAI features for GenAI for code and gathered additional design ideas from participants. Our work explores explainability needs for GenAI for code and demonstrates how human-centered approaches can drive the technical development of XAI in novel domains.
