GCP vs. AWS

Using containers to simplify the deployment of complex applications is more than just a trend these days. Many new apps are being developed and released as microservices running in individual containers. In fact, containerization is now one of the primary ways to leverage the native advantages of cloud computing platforms.

Google Cloud Platform (GCP) and Amazon Web Services (AWS) both support containerization through their fully managed Kubernetes offerings: Google Kubernetes Engine (GKE) and Amazon Elastic Kubernetes Service (EKS). Leveraging these services allows developers to rapidly deploy a robust cloud architecture without sacrificing efficiency or cost-effectiveness.

The real challenge is choosing the service that best fits your business-critical needs, especially when Microsoft and many other companies have jumped on the Kubernetes-as-a-service bandwagon with competitive offerings of their own.

At Cherre, we discussed the GCP vs. AWS question at length and weighed all the advantages and disadvantages that both platforms offer. In this article, we compare the two options for Kubernetes deployment and elastic container architecture, and we outline the decision-making process behind our in-house choice.

GKE vs. EKS – Agility

Let’s start by recognizing that Google was the company originally behind Kubernetes and its inception. Since version 1.0 was released in 2015 (the same year Google and the Linux Foundation formed the Cloud Native Computing Foundation, with Kubernetes as its flagship project), developers have adopted the platform extensively. Google was the first to introduce the Kubernetes architecture for running containers and managing clusters of cloud servers, so it is only natural that Google has an advantage in offering Kubernetes-specific features.

GKE, Google’s service for running Kubernetes, is also considered more agile than EKS, Amazon’s answer to the same problem. It is much easier to spin up an entire Kubernetes cluster on GKE than on EKS, and native Kubernetes commands and features are supported out of the box.
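
For illustration, here is a minimal sketch of what that looks like with the gcloud CLI, using placeholder cluster and zone names:

    # Create a three-node cluster; GKE provisions and manages the control plane for you
    gcloud container clusters create demo-cluster \
        --zone=us-central1-a \
        --num-nodes=3

    # Point kubectl at the new cluster
    gcloud container clusters get-credentials demo-cluster --zone=us-central1-a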

GKE also delivers native modules and add-ons that extend the functionality of Kubernetes. From personal experience, spinning up clusters in GKE is quicker than in EKS.

Once again, this all comes down to Google’s understanding of Kubernetes itself. Since the developers behind Kubernetes work closely with the developers building GKE as a service, it is not surprising that GKE is faster, smoother, and more reliable than EKS when it comes to maintaining Kubernetes clusters, especially complex, resource-intensive ones.

GCP vs. AWS – Pricing Comparison for Kubernetes

Next, we’ll consider the cost comparison between GCP and AWS. The cost models for GKE and EKS are similar to those of other cloud services from the two companies: you run Kubernetes clusters on a pay-as-you-go basis, which means you only pay for the resources you use for as long as you use them, with costs calculated at an hourly rate.

Google Cloud Platform Pricing Calculator, 2020

However, GKE and EKS use different methods for calculating operational costs. An 8-node cluster that runs for roughly 100 hours every month costs $38 on GCP using n1-standard-1 instances to support the cluster, as shown in the pricing calculator screenshot above.

As of January 2020, following Amazon’s recent price drop announcement for EKS, a cluster running on Amazon’s EKS will cost you a total of $72 per month at $0.10 an hour just for the control plane. That’s a significant difference indeed.

It is worth noting that this is the price of operating the cluster itself; it is not based on the number of workers the cluster contains. You still have to pay for the compute (either EC2 instance hours or Fargate compute resources) on top of the cluster cost in order for the cluster to be useful. Amazon’s recent cost reduction applies to the “overhead” cost of maintaining the cluster, that is, what the platform charges for cluster management.
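
As a rough sanity check on the figures above, here is a back-of-envelope sketch; it assumes the early-2020 on-demand list prices of roughly $0.0475/hour for n1-standard-1 and $0.10/hour for the EKS control plane, so treat the numbers as approximations:

    awk 'BEGIN { printf "GKE workers:       8 nodes x 100 h x $0.0475/h = $%.2f/month\n", 8*100*0.0475 }'
    awk 'BEGIN { printf "EKS control plane: 720 h x $0.10/h             = $%.2f/month\n", 720*0.10 }'

Note that the $38 GKE figure is essentially worker compute, while the $72 EKS figure covers the control plane alone, before any worker compute is added.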

For smaller applications, the difference is not too significant. You can offset the cost difference if you fully utilize Amazon’s massive cloud ecosystem, especially if you take advantage of native tools like RDS and CloudWatch. AWS also comes with a handful of free features to utilize. However, in the long run (and for more complex applications), GKE is the more cost-effective cloud service to use.

GKE vs. EKS – Deep Tech

Containerization is not only capable of supporting everyday apps and business solutions. Today, the same containerization technology is being used to power deep tech innovations including big data, artificial intelligence, and deep learning. These specific applications require more computing power and in certain instances, specific hardware such as GPUs.

This is where GKE really shines in comparison to EKS. Amazon may be rolling out broader GPU support soon, but Google is already ahead of the game with its implementation of GPU-powered cloud clusters. For applications such as machine learning and deep learning, these GPU-backed clusters with GKE running on top of them are the way to go.
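
As an illustration, adding GPU nodes to an existing GKE cluster is essentially a one-line operation. The sketch below assumes a hypothetical cluster name, plus a zone and accelerator type that are available in your project:

    # Add a GPU node pool to an existing cluster (cluster, zone, and GPU type are placeholders)
    gcloud container node-pools create gpu-pool \
        --cluster=my-cluster \
        --zone=us-central1-a \
        --num-nodes=1 \
        --accelerator=type=nvidia-tesla-t4,count=1

The GPU nodes still need the NVIDIA device drivers installed; Google publishes a driver-installer DaemonSet for this, which is covered in the GKE GPU documentation.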

You can run a full machine learning cycle faster in the cloud. Since you no longer have to invest in expensive hardware like the TX2 from NVIDIA, you can develop AI models and create big data analytics tools designed specifically for your business requirements without spending a fortune or wasting too much time.

Kubernetes is even designed to work with deep tech runtimes inside containers. For instance, you can run data lakes and feed large chunks of information to a machine learning cluster, all within Google’s cloud ecosystem. Furthermore, it is possible to leverage Google’s AI and big data APIs to augment your own AI and speed up the machine learning process.

GKE vs. EKS – More Deep Tech Support

Speaking of support for native APIs and tools, GCP provides big data and machine learning tools that GKE already supports. With TensorFlow specifically, there is a lot you can do to keep your cluster efficient.

Start by setting up Cloud TPU for model training. First, make sure you’re running GKE version 1.13.4-gke.5 or above; you can specify the version by adding the --cluster-version flag to the gcloud container clusters create command. You also need TensorFlow 1.13 or above, and you specify the TensorFlow version for Cloud TPU in your Kubernetes Pod spec with the annotation tf-version.cloud-tpus.google.com: "x.y". Note that you can only create a GKE cluster and node pools in a zone where Cloud TPU is available.
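
Putting that together, a minimal sketch might look like the following; the cluster name, zone, and container image are placeholders, and the example assumes a v2 TPU (the resource name changes with the TPU type you request):

    # Create a TPU-enabled, VPC-native cluster in a zone where Cloud TPU is available
    gcloud container clusters create tpu-demo \
        --zone=us-central1-b \
        --cluster-version=1.13.4-gke.5 \
        --scopes=cloud-platform \
        --enable-ip-alias \
        --enable-tpu

And in the Pod spec of your training workload, the TensorFlow version annotation plus a TPU resource limit along these lines:

    apiVersion: v1
    kind: Pod
    metadata:
      name: tpu-training-example                     # placeholder name
      annotations:
        tf-version.cloud-tpus.google.com: "1.13"
    spec:
      restartPolicy: Never
      containers:
      - name: trainer
        image: gcr.io/your-project/your-tf-image     # placeholder TensorFlow 1.13+ training image
        resources:
          limits:
            cloud-tpus.google.com/v2: 8              # requests one Cloud TPU v2 device (8 cores)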

You also need to create Cloud Storage buckets to hold your training data and models in the same region as your GKE cluster. To do this, go to the Cloud Storage page in the Cloud Console and configure a new bucket with a unique name, the standard default storage class, and the corresponding region. Then give Cloud TPU read/write access to the bucket.
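
If you prefer the command line to the Cloud Console, the same setup looks roughly like this; the bucket name and project number are placeholders, and the Cloud TPU service-account pattern shown is an assumption you should confirm on your project’s IAM page:

    # Create a regional bucket in the same region as the GKE cluster
    gsutil mb -l us-central1 -c standard gs://my-tpu-training-data/

    # Grant the Cloud TPU service account read/write access to the bucket
    gsutil iam ch \
        serviceAccount:service-PROJECT_NUMBER@cloud-tpu.iam.gserviceaccount.com:roles/storage.objectAdmin \
        gs://my-tpu-training-data/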

When using Cloud TPU with GKE, your project uses billable components of Google Cloud, so make sure that billing is enabled, along with the following APIs (a quick sketch for enabling them follows the list):

  • Cloud TPU API
  • Compute Engine API
  • GKE API
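
Assuming the standard service endpoints for these three APIs, enabling them from the command line is a one-liner:

    gcloud services enable \
        tpu.googleapis.com \
        compute.googleapis.com \
        container.googleapis.com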

Now, to get started, you can either use an official TPU model that has been containerized in a Docker image or build and containerize your own model. For the full guidelines on optimizing Cloud TPU for GKE, see Google’s Cloud TPU documentation.

GCP vs. AWS – Additional Tools and Market Share

As well as giving you more freedom to reconfigure a cluster, AWS also excels at delivering additional complementary tools to Kubernetes users. GKE may be the most suitable for running resource-intensive Kubernetes clusters, but you may have to rely on third-party tools for specific functions and added features outside of Kubernetes, and maintaining the cloud ecosystem itself is slightly more challenging.

When you need to house data in databases, for instance, Google’s Cloud SQL isn’t the most efficient solution to choose. Amazon RDS is a more stable offering than Cloud SQL, which still has components in beta. Amazon also provides AWS Lambda, a serverless compute service for running stateless functions on demand, which takes cloud computing to a whole new level and can, for example, serve queries against large databases without any servers to manage.

Other tools provided by Amazon are more easily implemented within the ecosystem. They are designed to work with each other natively, so setting up Amazon CloudWatch agents for individual containers, for example, is just as easy as monitoring your VMs through the AWS Management Console.

The above reasons, along with many other factors, are why many companies already use AWS for their existing cloud environments. As of February 2020, a study by StackRox reports that AWS continues to dominate the container space with 78% market share, while GCP is tightening the race for second place against Microsoft Azure (39%), climbing from 28% in Spring 2019 to 35% this year.

The State of Container and Kubernetes Security, 2020

As such, a lift-and-shift move to another cloud provider is likely excessive for now.

If, like Cherre, you are in the prime position of working on a new greenfield project, then start with GKE to capitalize on its superior long-term cost-effectiveness and speed. If not, then continuing with AWS and optimizing EKS is no particular hardship; AWS is a technically excellent cloud provider.

At Cherre, despite the comprehensive range of additional tools and services AWS offers, our workloads run on GKE for productivity and efficiency reasons. In addition, we benefit most from its cost-effective impact on our OPEX.

Choosing between the two managed Kubernetes service offerings will require balancing the above factors to best suit your individual requirements. However, for maximum Kubernetes performance and cloud-native support for the containerization framework, we believe that GCP has the edge for now.


Stefan Thorpe

Head of Engineering @ Cherre ▻ Cloud Solutions Architect ▻ DevOps Evangelist 

Stefan is an IT professional with 20+ years management and hands-on experience providing technical and DevOps solutions to support strategic business objectives.


References:

Google Cloud. (2020). Google Cloud Platform Pricing Calculator. [online] Available at: https://cloud.google.com/products/calculator [Accessed 2020].

The State of Container and Kubernetes Security. (2020). [ebook] StackRox, p.13. Available at: https://security.stackrox.com/rs/219-UEH-533/images/State_of_Container_and_Kubernetes_Report.pdf [Accessed 22 Feb. 2020].