In this document, we present an interactive interface for our performance model for Serverless Computing Platforms. You can take a look at our Github repository for the source code and other artifacts, or check out our publication (coming soon) for more details and experiments. Here are a few of the proposed model's characteristics:
Our performance model can handle most serverless computing platforms. This includes mainstream public serverless computing platforms like AWS Lambda, Google Cloud Functions, Azure Functions, and IBM Cloud Functions, which shows the broad applicability of the proposed model.
The proposed model solves the continuous-time Semi-Markov Process (SMP) to calculate the steady-state characteristics of the system. This makes it possible to analyze the system in the long run without performing long and expensive load testing on the actual system. All you need to do is measure the cold start and warm start performance of your function.
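To illustrate the general idea of steady-state analysis for a semi-Markov process, here is a minimal sketch. The three states, transition probabilities, and sojourn times below are hypothetical numbers for illustration only, not the actual model from the paper: we solve for the embedded Markov chain's stationary distribution, then weight each state by its mean sojourn time to get long-run time fractions.

```python
import numpy as np

# Hypothetical 3-state sketch: Cold start, Warm execution, Idle.
# P is the embedded-chain transition matrix (rows sum to 1);
# the values are illustrative assumptions, not measured data.
P = np.array([
    [0.0, 1.0, 0.0],   # a cold start always yields a warm instance
    [0.1, 0.6, 0.3],   # warm: may go cold, stay warm, or go idle
    [0.4, 0.0, 0.6],   # idle: may expire (cold) or stay idle
])
# Mean sojourn time per state (seconds), e.g. measured cold-start
# and warm-start latencies plus the platform's idle window.
tau = np.array([2.5, 0.05, 600.0])

def smp_steady_state(P, tau):
    """Long-run fraction of time spent in each SMP state.

    Solve pi = pi P for the embedded chain's stationary
    distribution, then weight each state by its mean sojourn time.
    """
    n = P.shape[0]
    # (P^T - I) pi = 0 together with sum(pi) = 1, as a least-squares system.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    p = pi * tau
    return p / p.sum()

print(smp_steady_state(P, tau))
```

With these illustrative numbers, the idle state dominates the time-stationary distribution simply because its sojourn time is orders of magnitude larger than the others.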
Our performance model can handle all scale-per-request serverless computing platforms, which perform autoscaling even at the granularity of a single request. Classical performance models cannot handle the dynamic environment of the modern serverless computing platform. This work is the first model that handles the complexities arising in this latest paradigm of cloud computing services.
The proposed model can be leveraged to perform what-if analysis and capacity planning for serverless providers. This helps them calculate optimal configurations for each individual workload and provide better services with stronger Quality of Service (QoS) guarantees and better cost-performance tradeoffs.
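As a flavor of what such a what-if analysis might look like, here is a simplified, self-contained sketch. It assumes Poisson request arrivals and a single function instance, which is a deliberate simplification of the full model: under these assumptions, the next request hits a cold start exactly when the inter-arrival gap exceeds the keep-alive window T, so P(cold) = exp(-λT). The rate and window values are made up for illustration.

```python
import math

def cold_start_prob(lam, T):
    """P(cold start) for Poisson arrivals at rate lam (req/s) and a
    keep-alive window of T seconds, assuming a single instance:
    the next arrival is cold iff the inter-arrival gap exceeds T."""
    return math.exp(-lam * T)

# Sweep candidate keep-alive settings for a hypothetical 0.01 req/s workload.
lam = 0.01
for T in (60, 300, 600, 1800):
    print(f"keep-alive {T:>5} s -> P(cold start) = {cold_start_prob(lam, T):.3f}")
```

A provider could extend such a sweep with a memory-time cost term (longer keep-alive means paying for idle instances) to pick the window that balances cold-start probability against cost.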
Our interactive model helps researchers and serverless computing platform users predict the behaviour of their applications in the long run.
The inputs for our model can be classified into two sections: System Properties and Workload Properties.
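The split into the two input groups could be sketched as follows. The field names below are assumptions chosen for illustration (the cold/warm start times come from the measurements described above); they are not necessarily the tool's actual input names.

```python
from dataclasses import dataclass

# Hypothetical illustration of the two input groups; field names
# are assumptions for this sketch, not the tool's actual inputs.
@dataclass
class SystemProperties:
    cold_start_time_s: float   # measured cold-start response time
    warm_start_time_s: float   # measured warm-start response time
    idle_expiration_s: float   # platform's instance keep-alive window

@dataclass
class WorkloadProperties:
    arrival_rate_rps: float    # average request arrival rate

inputs = (SystemProperties(2.5, 0.05, 600.0), WorkloadProperties(0.01))
print(inputs)
```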