Note: if you are a cloud provider and believe this comparison is biased, please feel free to comment or to propose a pull request!
In this article, I’m using an open source project (https://github.com/quortex/cloudbench/) to build a cost/performance comparison of the instances available at GCP, AWS and Azure. Beware! This only takes the VM resource into account, which is far from enough for estimating the cost of a complete channel (ingress, egress, firewall, DNS, routers, gateways, CDN, … all play a significant role in the equation). That being said, one of the great things about the cloud is that the comparison is made easier because all the prices are public: let’s see who offers the most pixels per dollar!
On-Demand, Preemptible, Committed
All the cloud providers have these 3 pricing models:
The “On-Demand” model is fairly simple: request a machine and pay for the amount of time you use it (1)
The “Committed” model can be slightly more complex, but conceptually, you buy a given “compute capacity” for a given duration, and you get significant savings in exchange for that commitment:
You can choose to pay in advance for that capacity, in which case savings will be larger (2)
You can choose to commit to an overall capacity or even to a given instance type; the less flexibility, the bigger the savings
The “preemptible” model (a.k.a. “Spot” or “Low priority”) is usually the most cost-effective option: you request an instance, but it can be preempted at any time by the cloud provider… Unless your workflow was designed with cloud-native technologies (guess what… that’s what we did at Quortex!), it’s unlikely you can use such machines in production environments.
To make that comparison, I did my best to be fair and not bias it toward any given cloud provider, i.e.:
We did not take Google’s “sustained use” discount into account when comparing on-demand prices
For committed usage, we used:
AWS EC2 Instance Savings Plans, without any upfront payment: https://aws.amazon.com/fr/savingsplans/pricing/
“Committed Use Discount” for GCP (no upfront payment): https://cloud.google.com/compute/docs/instances/signing-up-committed-use-discounts
Azure “Reserved VM Instances”, without any upfront payment (https://azure.microsoft.com/en-us/pricing/reserved-vm-instances/)
FFmpeg was used to benchmark the machines. This is of course totally biased towards video processing. We transcoded simultaneously all the profiles recommended by the Apple Authoring Specifications, and we also transcoded files with a single profile.
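To give a rough idea of the methodology, the benchmark boils down to timing FFmpeg transcodes and comparing wall-clock durations across instances. Here is a minimal sketch; the ffmpeg arguments shown in the comment are illustrative placeholders, not the exact cloudbench settings:

```python
import subprocess
import time


def time_command(cmd):
    """Run a command to completion and return its wall-clock duration in seconds."""
    start = time.monotonic()
    subprocess.run(cmd, check=True,
                   stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return time.monotonic() - start


# Hypothetical single-profile run (cloudbench also runs the full
# Apple Authoring Specifications ladder in parallel):
# elapsed = time_command(["ffmpeg", "-y", "-i", "sample.mp4",
#                         "-c:v", "libx264", "-b:v", "6000k",
#                         "-s", "1920x1080", "-f", "null", "-"])
```

The measured duration is what feeds the charts below, either directly (performance) or multiplied by the hourly price (cost).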
Only 16-thread instances have been used for this test, but it’s absolutely trivial to change the tests to include other sizes; I don’t expect it would change the results below. I focused on what cloud providers call “Compute Optimized” machines. Again, the cloud providers have different strategies for sizing their instances, and building a comparison is not straightforward. (3)
As one can see, there may not be a direct mapping between an instance name and an underlying hardware type. Requesting an “n1-highcpu-16” machine can mean either a 2012 Sandy Bridge Xeon at 3.2 GHz… or a 2017 Intel Xeon Scalable (1st generation). It’s worth noting that the CPU generation can be enforced, depending on datacenter availability.
Lower is Better
There is a lot to say on this chart alone. Let me just drop a few facts:
The Azure F16s is ahead of the pack by quite a significant amount. I have to say it’s been difficult to analyze this result, as the F16s is an “old generation” compute-optimized platform. Azure claims it runs a 2673 v3, which… does not exist on the Intel reference website. Furthermore, it allegedly runs at 3.1 GHz, which does not make it the highest CPU frequency in this comparison. Still, it outperforms other platforms running on Haswell by significant amounts (it’s 35% faster than the c4.4xlarge and 51% faster than the n1-highcpu-16#Haswell!). The reason is that the Fs series does not use hyperthreading: you get one real physical core per vCPU, whereas all the other instances give you one hyperthread per vCPU. It makes a big difference!
Although running on different hardware platforms, Google does a great job of aligning the n1 family’s performance (less than 1% difference between the Sandy Bridge and Broadwell versions)
The performance of the “compute flagship” instances of AWS (c5.4xlarge), Azure (F16s_v2) and Google (c2-standard-16) is remarkably similar.
The AMD EPYC Rome CPU performance (powering the n2d-highcpu-16) is quite stunning
Price / Performance tradeoff
To build a price comparison, let me introduce a new unit that I called the “Quortex” :). A Quortex is “the amount of dollars I have to spend to run a given processing job”. In other words, it’s the duration of this processing multiplied by the hourly price of the machine the code ran on. Although it has limited meaning per se, this unit is very convenient for comparing the overall price/performance tradeoff.
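In code, the unit is simply duration × hourly price, scaled to milli-Quortex (mQx) as used in the charts below. A small helper, with placeholder numbers (not actual quotes from any provider):

```python
def quortex(duration_hours, hourly_price_usd):
    """Cost of one benchmark run: processing duration multiplied by the
    hourly price of the machine, returned in milli-Quortex (mQx)."""
    return duration_hours * hourly_price_usd * 1000.0


# Illustrative numbers only: a 6-minute transcode on a $0.80/h instance.
print(round(quortex(0.1, 0.80), 1))  # prints 80.0 (mQx)
```

Lower is better: a faster machine or a cheaper hourly rate both drive the score down.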
Let’s start simple and see which instance offers the best performance/price ratio.
On Demand mQx
Lower is Better
Google leads the pack, and the n2d is a clear winner: it’s far, far ahead of its rivals and offers an unmatched price/performance ratio
The GCP e2 family also shines, but keep in mind that its level of performance is not guaranteed. With that in mind, it is a great compromise and a perfect choice for a wide variety of workflows.
The c5.4xlarge and the F16s_v2 not only have the same performance, they are also priced the same… Hence, they share the same score.
They can be called Preemptible, Spot or Low Priority; in any case, they provide significant savings. For the machines used in this comparison, AWS/Azure/GCP savings are 69%, 76% and 80%, respectively. Again, the cloud providers do not have the same strategy, and price alone may not be the only criterion you want to monitor: the preemption rate (which is usually not disclosed) is likely to be an important factor… This will be the subject of another blog post.
Lower is Better
Using preemptible VMs tells a different story. The n2d remains a clear leader, but Azure instances are a close second, as their price drops by about 80% from on-demand to preemptible.
Since AWS savings on Spot instances are smaller than Azure’s and GCP’s, AWS doesn’t score very well in this comparison. The c5.4xlarge, for instance, scores 64 mQx, while the Azure F16s_v2 scores a nice 36 mQx: it means that for the same price, you will get almost twice as many pixels with the F16s_v2 as with the c5.4xlarge!
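Since lower mQx means more processing per dollar, the “twice as many pixels” claim follows directly from the two scores above: for a fixed budget, the throughput ratio is the inverse of the cost ratio.

```python
# Spot / low-priority scores from the chart above (lower is better).
c5_spot_mqx = 64    # AWS c5.4xlarge (Spot)
f16s_spot_mqx = 36  # Azure F16s_v2 (Low Priority)

# For the same budget, relative throughput is the inverse of relative cost.
ratio = c5_spot_mqx / f16s_spot_mqx
print(f"{ratio:.2f}x")  # prints 1.78x: almost twice the pixels per dollar
```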
The ARM-powered m6g.4xlarge is clearly interesting. Although it’s not the best, it comes with a lot of memory, and I’m really eager to test the c6g.4xlarge (the compute-optimized version of the m6g, i.e. with less memory).
Cloud providers really do their best to make the committed-usage comparison a very complex story. It’s interesting to note that AWS offers the best savings versus on-demand (44% on average for a 1-year commitment, 63% for 3 years), while Azure offers 36%/59% and GCP 37%/55%. Let’s see what this means in terms of mQx.
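Since the benchmark duration doesn’t change with the pricing model, a committed-use mQx score is just the on-demand score with the discount applied. A quick sketch using the average discounts quoted above (the 100 mQx baseline is a placeholder, not a measured score):

```python
def committed_mqx(on_demand_mqx, discount):
    """Apply a committed-use discount (e.g. 0.44 = 44% off) to an on-demand mQx score."""
    return on_demand_mqx * (1.0 - discount)


# Average (1-year, 3-year) discounts quoted above, per provider.
discounts = {"aws": (0.44, 0.63), "azure": (0.36, 0.59), "gcp": (0.37, 0.55)}
for provider, (one_yr, three_yr) in discounts.items():
    print(provider,
          round(committed_mqx(100, one_yr)),
          round(committed_mqx(100, three_yr)))
```

This is why a provider with a mediocre on-demand score but aggressive committed discounts (like AWS) can climb back up the ranking below.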
1yr Commit mQx
Lower is Better
While the n2d is still a clear winner, followed by the e2, the AWS c5.4xlarge comes close, ahead of the GCP n1 family. The Azure F16s_v2 ranks close to the c5.4xlarge.
The m6g.4xlarge score is impressive, and it offers better value than many of its Intel rivals
3yr Commit mQx
Lower is Better
No surprise here; the ranking is almost the same as with the one-year commitment.
The n2d and the e2 still bring the same value.
The podium changes slightly, as the F16s_v2 is now slightly ahead of the c5.4xlarge, but the difference is not really significant.
In a nutshell
Cloud providers all have different strategies for offering their instances, and making a comparison based on the available documentation alone is difficult (not to say impossible)
Depending on your scenario, you may have to select different cloud providers as the on-demand/preemptible/committed scenarios highlight different rankings
AMD EPYC CPUs seem to be a great value
ARM CPUs are worth following closely, and the c6g.4xlarge (announced by AWS) is likely to be a very interesting proposition
Using preemptible instances is (by far) the best way to save money and avoid any type of vendor lock-in. Guess what? You can do this using Quortex software ;).
Marc Baillavoine, CEO
(1) Google also has a concept of “automatic sustained use discount” that can lower the on-demand price by up to 30% in case of sustained usage over a month.
(2) Not all cloud providers allow paying in advance.
(3) It’s worth noting that some machines were included in this comparison to benchmark their CPU but may not be relevant given their amount of memory (for instance, the “m” series from AWS).