Paper Title
Characterizing Commodity Serverless Computing Platforms
Paper Authors
Paper Abstract
Serverless computing has become a new trending paradigm in cloud computing, allowing developers to focus on core application logic and rapidly construct prototypes through the composition of independent functions. With the development and prosperity of serverless computing, major cloud vendors have successively rolled out their commodity serverless computing platforms. However, the characteristics of these platforms have not been systematically studied. Measuring these characteristics can help developers select the most suitable serverless computing platform and develop their serverless-based applications in the right way. To fill this knowledge gap, we present a comprehensive study characterizing mainstream commodity serverless computing platforms, including AWS Lambda, Google Cloud Functions, Azure Functions, and Alibaba Cloud Function Compute. Specifically, we conduct both a qualitative and a quantitative analysis. In the qualitative analysis, we compare these platforms from three aspects (i.e., development, deployment, and runtime) based on their official documentation, and construct a taxonomy of characteristics. In the quantitative analysis, we evaluate the runtime performance of these platforms along multiple dimensions using well-designed benchmarks. First, we analyze three key factors that influence the startup latency of serverless-based applications. Second, we compare the resource efficiency of the platforms using 16 representative benchmarks. Finally, we measure their performance differences under varying levels of concurrent requests and explore the potential causes in a black-box fashion. Based on the results of both analyses, we derive a series of findings and provide insightful implications for both developers and cloud vendors.
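To make the black-box measurement setting concrete, the sketch below shows one common way to probe the startup latency of an HTTP-triggered serverless function: invoke it once after a long idle period (likely a cold start) and then repeatedly back-to-back (likely warm starts), recording end-to-end latency from the client side. This is a minimal illustration, not the paper's actual benchmark suite; the endpoint URL is a placeholder, and the idle-wait heuristic for forcing cold starts is an assumption about typical platform instance eviction behavior.

```python
"""Minimal sketch of a black-box latency probe for an HTTP-triggered
serverless function. FUNCTION_URL is a hypothetical placeholder, and
the idle-wait heuristic for forcing cold starts is an assumption,
not the paper's exact methodology."""
import time
import statistics
import requests

FUNCTION_URL = "https://example.execute-api.us-east-1.amazonaws.com/hello"  # hypothetical endpoint
IDLE_SECONDS = 30 * 60   # idle long enough that the platform likely evicts the warm instance
WARM_INVOCATIONS = 20    # immediate follow-up calls that should hit a warm instance


def invoke_once() -> float:
    """Return the end-to-end latency (ms) of a single synchronous invocation."""
    start = time.perf_counter()
    response = requests.get(FUNCTION_URL, timeout=60)
    response.raise_for_status()
    return (time.perf_counter() - start) * 1000.0


def measure() -> None:
    # First call after a long idle period: likely a cold start.
    time.sleep(IDLE_SECONDS)
    cold_ms = invoke_once()

    # Immediate follow-up calls: likely warm starts.
    warm_ms = [invoke_once() for _ in range(WARM_INVOCATIONS)]

    print(f"cold start (end-to-end): {cold_ms:.1f} ms")
    print(f"warm start median:       {statistics.median(warm_ms):.1f} ms")


if __name__ == "__main__":
    measure()
```

Because the probe only observes client-side response times and platform behavior inferred from them, it requires no access to the provider's internals, which is what "in a black-box fashion" refers to in the abstract.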