Google Expands Container Cloud Services

As a cloud service provider, Google holds a comfortable third place behind Amazon Web Services (AWS) and Microsoft Azure. But as IT organizations begin to shift to microservices based on containers, Google is betting its Google Container Engine (GKE) service will propel it ahead of both rivals.

Google said it will update GKE to add not only support for version 1.8 of the Kubernetes container orchestration platform, but also a raft of additional networking and management services that make containers first-class citizens alongside virtual machines on the Google Cloud Platform (GCP). Those services and capabilities include:

  • Kubernetes cluster startup times are now 66 percent faster; a five-node cluster can start in less than one minute.
  • An early access program provides Highly Available Masters. Masters and nodes can now be deployed across up to three zones within a region for additional protection from failures, and GKE automatically fails over to backup masters and nodes when required.
  • Node Auto-Repair is now generally available. GKE uses the Kubernetes Node Problem Detector to identify unhealthy nodes and proactively repair nodes and clusters.
  • Node Auto-Upgrade is generally available. IT organizations can specify maintenance windows for upgrading nodes.
  • Custom metrics on the Horizontal Pod Autoscaler are now in beta. IT organizations can scale workloads on any metric, not just CPU utilization (see the sketch after this list).
  • Cluster Autoscaling is generally available, supporting up to 1,000 nodes with up to 30 pods per node. In addition, IT organizations can specify a minimum and maximum number of nodes for a cluster, and clusters scale up and down as required.
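
To illustrate the custom-metrics capability mentioned above, here is a minimal sketch of a Horizontal Pod Autoscaler manifest that scales on something other than CPU. The deployment name, metric name and target value are hypothetical, and the autoscaling/v2beta1 API shown is the beta API associated with Kubernetes 1.8.

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend-hpa
spec:
  scaleTargetRef:
    kind: Deployment
    name: frontend                     # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: requests_per_second  # hypothetical custom metric exported by the application
      targetAverageValue: "100"        # add pods until the per-pod average falls to this target

In this form, the autoscaler adjusts the replica count so the average value of the custom metric across the frontend pods stays near the target, within the minimum and maximum bounds.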

GKE can now also invoke all the capabilities of Google's software-defined network (SDN) via its own application programming interfaces (APIs). Those capabilities include:

  • Aliased IP support is in beta for new clusters, with support for existing clusters planned. GKE clusters can be connected over a peered virtual private cloud (VPC).
  • Google Cloud Load Balancing is now available for GKE, enabling the building of distributed services.
  • Shared-VPC support will soon be available in alpha. Multiple GCP projects can be connected by department or application.
  • Managed CUDA-as-a-service leverages the latest NVIDIA graphics processing units (GPUs) to run applications based on machine learning algorithms.
  • GCP storage is now accessible for Apache Spark applications on Kubernetes.
  • CronJobs are now in beta. IT organizations can schedule recurring jobs, such as data processing pipelines, to run on a specific schedule (a sample CronJob appears after this list).
  • An Ubuntu node image is in beta, though Container-Optimized OS (COS) remains the default node image.
  • Third Party Resources (TPRs) are being replaced by Custom Resource Definitions (CRDs), a lightweight way to store structured metadata in Kubernetes.
  • Role-Based Access Control (RBAC) is now generally available. This feature provides fine-grained regulation of access to computer and network resources (a sample Role and RoleBinding appear after this list).
  • Network Policy Enforcement is now in beta, implemented via the open-source Project Calico network virtualization overlay (a sample network policy appears after this list).
  • TLS Kubelet client cert rotation is in beta. IT organizations can now automatically rotate Kubelet certs so they have a shorter lifetime to enhance security.
  • Node Allocatable is generally available. This capability protects node components from out-of-resource issues.
  • Priority/Pre-emption is available in alpha clusters. This enables IT organizations to give certain pods processing priority over others.
  • The Google Container Engine user interface has been extended to provide more insights into resources, as well as improved integration with Stackdriver and Cloud Shell. Organizations can view and edit YAML files directly in the UI. There are also shortcuts for the most common user actions.
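
As a concrete illustration of the CronJob beta noted above, the following is a minimal sketch of a batch/v1beta1 CronJob manifest, the API version associated with Kubernetes 1.8. The job name, container image and arguments are hypothetical.

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-etl                                  # hypothetical job name
spec:
  schedule: "0 2 * * *"                              # standard cron syntax: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure                   # retry the pod if the run fails
          containers:
          - name: etl
            image: gcr.io/example-project/etl:latest # hypothetical container image
            args: ["--run-pipeline"]                 # hypothetical argument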
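
The generally available RBAC feature referenced above is expressed through Roles and RoleBindings. A minimal sketch, assuming a hypothetical user who only needs to read pods in a single namespace, looks like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]                  # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane@example.com           # hypothetical user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Binding the same Role to a group or a service account follows the identical pattern, only the subject changes.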
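
The Calico-backed Network Policy Enforcement beta mentioned above enforces standard Kubernetes NetworkPolicy objects. A minimal sketch, with hypothetical app labels and port, that restricts ingress so only frontend pods can reach a backend:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend          # hypothetical label on the pods being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend     # hypothetical label on the allowed clients
    ports:
    - protocol: TCP
      port: 8080            # hypothetical service port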

GKE is now also available in Frankfurt, Northern Virginia and São Paulo. Tim Hockin, principal engineer for Kubernetes and Google Container Engine, says Google will become the preferred provider of Kubernetes as a cloud service because it has extensive experience managing containers over the past 14 years. Google donated Kubernetes to the Cloud Native Computing Foundation (CNCF). Google also now provides per-second billing, with a minimum charge of one minute.

However, while Google is making containers first-class citizens alongside virtual machines at the API level, it still requires virtual machines to deploy GKE because it is not satisfied with the level of isolation provided by containers deployed on bare-metal servers. Hockin says that, unlike AWS, Google is also encouraging IT organizations to deploy Kubernetes locally to create a true hybrid cloud computing environment rather than forcing them to refactor their applications. Those local instances will make it easier to lift and shift applications into GKE.

Long term, Hockin says the goal is to eliminate IT administrator involvement in managing Kubernetes. The ability to automatically upgrade Kubernetes clusters is a major step in that direction, he says. It’s also a significant point of differentiation because Google claims to be the only cloud service provider to offer this capability.

Google is not the only cloud service provider focusing on Kubernetes. But as IT organizations become more familiar with Kubernetes, chances are many of them are going to prefer to learn from the master.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
