Most organisations that have adopted #containers tend to use them in conjunction with a managed Kubernetes service, typically from one of the major cloud platform providers (#GKE, #AKS, #EKS, #OKE, or #ACK).
Whilst there are many benefits to adopting containers, the cost savings are largely dependent on how highly utilised the managed clusters are.
This article discusses how serverless containers could help reduce your cloud spend further, especially where you have a variable workload.
About Ajit Gupta
Senior Technology Architect with over 20 years of complex delivery experience, focused on mutually exploring solutions that truly meet client needs.
For any queries or feedback you may have regarding this article or to discuss other architecture challenges, please contact me at ajit@midships.io
Serverless Containers are where the cloud vendor provisions, on the fly, the exact amount of resources required to run a workload.
In a traditional containerised architecture, clusters tend to be over-provisioned to allow for both vertical and horizontal scaling in order to accommodate peak workloads. As a result, when operating below peak you are paying for resources that are not being utilised. In comparison, with serverless containers you only pay for what is used, as illustrated below.
Consider an ecommerce solution where you experience spikes during promotions. Instead of provisioning sufficient cluster resources for autoscaling to accommodate those peaks, with serverless you can provision additional container resources or spawn new instances as and when required.
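To make the billing difference concrete, here is a minimal sketch comparing the two models over a hypothetical day of spiky traffic. The demand profile and the hourly rate are illustrative placeholders, not vendor pricing; real serverless billing also meters memory and requests, and per-unit rates are typically higher than reserved capacity, but the shape of the saving (not paying for idle headroom) is the same.

```python
# Illustrative only: placeholder traffic profile and pricing, not real vendor rates.

HOURLY_RATE_PER_VCPU = 0.045  # placeholder $/vCPU-hour, same rate used for both models

# Hypothetical 24-hour demand profile (vCPUs needed each hour); promotion spike in the evening.
demand = [2, 2, 2, 2, 2, 3, 4, 6, 8, 8, 8, 10, 10, 8, 8, 8, 10, 12, 20, 40, 40, 20, 6, 3]

# Traditional cluster: provisioned for peak demand and billed around the clock, used or not.
provisioned_vcpus = max(demand)
cluster_cost = provisioned_vcpus * 24 * HOURLY_RATE_PER_VCPU

# Serverless containers: billed only for the vCPU-hours actually consumed.
serverless_cost = sum(demand) * HOURLY_RATE_PER_VCPU

print(f"Cluster (peak-provisioned): ${cluster_cost:.2f}/day")
print(f"Serverless (pay-per-use):   ${serverless_cost:.2f}/day")
print(f"Idle capacity paid for:     {1 - sum(demand) / (provisioned_vcpus * 24):.0%}")
```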
Let's review a real example to better understand the cost difference between services. For a basic production #ForgeRock containerised stack, we typically recommend the following:
A 2-node managed cluster
Each node with 12 vCPU & 24GB RAM
Each node will then run the following:
We have assumed that in practice we will run at peak for 15% of the time (approx 3.6 hours per day).
Our overprovisioning is 2 vCPU & 7GB RAM on each cluster node so that we have sufficient capacity to run other containerised services (e.g. sidecars), enable limited horizontal scaling, and undertake rolling updates.
However, for the purposes of this comparison, we will also include the cost of running a cluster with the minimum resources required to support only vertical scaling.
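Before looking at the numbers, the sketch below shows how these assumptions turn into a monthly bill. The hourly rates, and the assumption that the stack sits at roughly 20% of peak outside the 3.6 peak hours, are placeholders for illustration only, not the actual provider rates behind the comparison that follows.

```python
# Sketch only: placeholder hourly rates, not actual provider pricing.
RATE_PER_VCPU_HOUR = 0.04   # assumed $/vCPU-hour
RATE_PER_GB_HOUR   = 0.004  # assumed $/GB-hour
HOURS_PER_MONTH    = 730

# Managed cluster: 2 nodes of 12 vCPU / 24 GB, billed around the clock whether busy or idle.
cluster_monthly = 2 * (12 * RATE_PER_VCPU_HOUR + 24 * RATE_PER_GB_HOUR) * HOURS_PER_MONTH

# Minimum cluster: strip the 2 vCPU / 7 GB per-node overprovisioning and apply a 30% discount.
min_cluster_monthly = 0.7 * 2 * (10 * RATE_PER_VCPU_HOUR + 17 * RATE_PER_GB_HOUR) * HOURS_PER_MONTH

# Serverless: pay only for what the workload consumes. Peak footprint is the 2 x (10 vCPU / 17 GB)
# the stack actually needs; we assume it runs at peak 15% of the time (about 3.6 hours a day)
# and at roughly 20% of peak otherwise -- an illustrative profile, not a measurement.
peak_vcpu, peak_gb = 2 * 10, 2 * 17
utilisation = 0.15 + 0.20 * (1 - 0.15)
serverless_monthly = utilisation * (peak_vcpu * RATE_PER_VCPU_HOUR
                                    + peak_gb * RATE_PER_GB_HOUR) * HOURS_PER_MONTH

print(f"2-node cluster:                 ~${cluster_monthly:,.0f}/month")
print(f"minimum cluster (30% discount): ~${min_cluster_monthly:,.0f}/month")
print(f"serverless containers:          ~${serverless_monthly:,.0f}/month")
print(f"saving vs discounted minimum:   {1 - serverless_monthly / min_cluster_monthly:.0%}")
```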
The approximate monthly cost for running a 2 node cluster is as follows:
Whereas on Serverless Containers it will be:
Even if we compare against the minimum cluster size after applying a 30% discount to the cluster cost, there is a potential cost saving of up to 64%. GCP is the most expensive of the major cloud providers here, partly because it also offers a free tier.
Other benefits
Going serverless doesn't just reduce cost; it can also help deliver other benefits, including:
Improved security and alignment with standards, as developers must work within the serverless platform's constructs.
Reduced server management and simplified scalability management.
Quicker deployments and updates (particularly with respect to canary and rolling updates).
Greater focus on your product as opposed to maintenance.
Also worth noting that the Availability SLAs for Serverless Containers are as follows:
GCP Cloud Run - 99.9%
AWS Fargate - 99.99%
Azure ACI - 99.95%
AliCloud ECI - 99.99%
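As a quick reference, those percentages translate into the following maximum downtime per 30-day month (a straight conversion of the figures above, not a vendor commitment):

```python
# Convert availability SLAs into allowed downtime per 30-day month.
MINUTES_PER_MONTH = 30 * 24 * 60

slas = {
    "GCP Cloud Run": 99.9,
    "AWS Fargate": 99.99,
    "Azure ACI": 99.95,
    "AliCloud ECI": 99.99,
}

for service, sla in slas.items():
    downtime = MINUTES_PER_MONTH * (1 - sla / 100)
    print(f"{service:14s} {sla:6.2f}%  ->  up to {downtime:5.1f} minutes downtime/month")
```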
At Midships, we see Serverless Containers becoming the norm over the next couple of years and, for many, a first step towards the next evolution of serverless cloud computing.
To learn more about Midships and how we can help you on your cloud journey, please feel free to reach out to me at ajit@midships.io or set up a free architecture discussion here.
Useful Links