Google Extends Hybrid Appeal of Container Engine Cloud

Google has been at the forefront of advancing container adoption, thanks largely to its work developing the Kubernetes container orchestration framework. The company this week moved to make those investments more accessible to a wider range of IT organizations via the Google Container Engine cloud service.

In addition to providing support for the latest version of Kubernetes, Google is also now making it simpler to employ Google Container Engine within the context of a hybrid cloud computing environment, says Aparna Sinha, group product manager for Google Container Engine. IT teams now can deploy clusters and access resources using all-private IP ranges, in addition to being able to integrate Container Engine clusters with existing networks. There is also a beta release of a function that exposes services via internal load balancing, which allows Kubernetes and non-Kubernetes services to access one another on a virtual private network. In addition, source IP preservation is now generally available, which makes applications fully aware of client IP addresses for services exposed through Kubernetes.
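A minimal sketch of how those two capabilities come together in a service manifest, assuming a hypothetical internal-api service: the cloud.google.com/load-balancer-type annotation requests Container Engine's internal load balancing (beta), and externalTrafficPolicy: Local is the Kubernetes setting behind source IP preservation. The service name, labels and ports are illustrative, not part of the announcement.

```yaml
# Hypothetical Service exposed only on the VPC via internal load balancing.
# externalTrafficPolicy: Local routes traffic straight to local backends so
# the pods see the original client IP address.
apiVersion: v1
kind: Service
metadata:
  name: internal-api
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: internal-api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
```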

Google is also adding support for a beta release of an API Aggregation function that extends the Kubernetes API with custom APIs, as well as making Dynamic Admission Control available in alpha clusters, which provides two mechanisms for adding business logic to a cluster.
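As a sketch of how the aggregation mechanism is wired up, the manifest below registers a hypothetical extension API server with the Kubernetes aggregator; the group name, service name and namespace are assumptions made for illustration.

```yaml
# Hypothetical APIService: tells the aggregator to proxy requests for
# example.mycompany.com/v1alpha1 to an extension API server running
# behind a Service inside the cluster.
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1alpha1.example.mycompany.com
spec:
  group: example.mycompany.com
  version: v1alpha1
  groupPriorityMinimum: 1000
  versionPriority: 15
  service:
    name: example-extension-apiserver
    namespace: kube-system
  # For brevity only; a real deployment would supply caBundle instead.
  insecureSkipTLSVerify: true
```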

To enhance overall security, the company is adding support for more granular access controls. A kubelet running on Google Container Engine will only have access to the objects it needs. The beta release of a Node authorizer restricts each kubelet’s API access to resources (such as secrets) belonging to its scheduled pods.

Other security-related enhancements include a Kubernetes NetworkPolicy API that allows users to control which pods can communicate with one another, and HTTP re-encryption through Google Cloud Load Balancing (GCLB), which allows IT operations teams to employ HTTPS from the GCLB to their service backends.
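As a minimal sketch of the NetworkPolicy API, the policy below permits only pods labeled app=frontend to reach pods labeled app=api on TCP port 8080, with all other ingress to those pods denied; the labels and port are hypothetical.

```yaml
# Hypothetical NetworkPolicy: locks down ingress to the app=api pods so that
# only app=frontend pods in the same namespace may connect, and only on 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```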

Google also revealed it has added an automated updates capability for stateful applications built using containers, alongside additional auto-repair and auto-update capabilities for the container cluster.
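In Kubernetes terms, automated updates for stateful applications are expressed as an update strategy on a StatefulSet. The sketch below is a hypothetical three-replica StatefulSet that opts into automated rolling updates, using the apps/v1beta1 API version current at the time and an arbitrary nginx image for illustration.

```yaml
# Hypothetical StatefulSet with automated rolling updates: changing the pod
# template (for example, the image tag) triggers an ordered, one-pod-at-a-time
# rollout instead of requiring pods to be deleted by hand.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.13
        ports:
        - containerPort: 80
```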

Finally, Google says it is replacing the Third Party Resource (TPR) API with a lighter-weight Custom Resource Definition (CRD) API to store structured metadata in Kubernetes. That move also makes it easier for Google Container Engine to interact with custom controllers via kubectl.
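As a sketch of what the CRD API looks like in practice, the definition below registers a hypothetical Backup resource; once applied, objects of that kind can be created and listed with ordinary kubectl commands (for example, kubectl get backups), which is what lets custom controllers plug into the standard tooling.

```yaml
# Hypothetical CustomResourceDefinition: registers a namespaced Backup kind
# under the example.mycompany.com group so instances can be stored in the
# Kubernetes API and managed with kubectl like built-in resources.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # The name must be <plural>.<group>.
  name: backups.example.mycompany.com
spec:
  group: example.mycompany.com
  version: v1
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
    shortNames:
    - bk
```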

As Kubernetes continues to evolve, the company is positioning the core container orchestration engine as a platform on which IT organizations can opt to deploy a container-as-a-service (CaaS) environment, a platform-as-a-service (PaaS) environment or a serverless computing framework. Many IT organizations will wind up running two or more types of computing architecture on top of an extensible Kubernetes orchestration engine.

In general, Sinha notes that Google is interacting more with IT operations teams in the enterprise as usage of cloud services matures. The issue many of those IT operations teams are contending with now is that the initial selection of the public cloud on which to host applications was heavily influenced by developers. In fact, Sinha says one of the things that most differentiates the company in the cloud is that it has already automated Kubernetes deployments down to a single click of a button.

As IT operations teams start to exercise more influence, Google is clearly betting that more workloads will start to shift in its direction.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
