Google Adds Container Sandbox to GKE Advanced

Google this week announced the beta availability of an additional layer of isolation for the Advanced Edition of its Google Kubernetes Engine (GKE) managed container service, based on the open source gVisor sandbox project it launched last year.

Yoshi Tamura, product manager for GKE and gVisor at Google, says Google is trying to encourage enterprise IT organizations to deploy more containerized applications on Kubernetes clusters running on its virtual machines. The open source gVisor project provides a lightweight container runtime that intercepts application system calls in user space, further isolating containerized applications running on the same Kubernetes cluster from one another and from the host kernel.
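
For readers curious what opting into the sandbox looks like in practice, the sketch below uses the official Kubernetes Python client to schedule a pod onto a gVisor-backed node pool. It is a minimal illustration, not Google's documented procedure: it assumes the sandbox is already enabled on a node pool and is exposed through a Kubernetes RuntimeClass named "gvisor"; the pod name and image are arbitrary examples.

```python
# Minimal sketch: routing a pod to the gVisor sandbox on a cluster where a
# sandboxed node pool exists. Assumes the official `kubernetes` Python client
# is installed and a RuntimeClass named "gvisor" is available (an assumption
# for illustration, not a statement of GKE's exact configuration).
from kubernetes import client, config

config.load_kube_config()  # use the active kubectl context

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="sandboxed-nginx"),
    spec=client.V1PodSpec(
        # RuntimeClass is the Kubernetes mechanism for selecting an
        # alternative container runtime; here it asks for the gVisor
        # runtime instead of the default runc.
        runtime_class_name="gvisor",
        containers=[
            client.V1Container(
                name="nginx",
                image="nginx:1.17",
                ports=[client.V1ContainerPort(container_port=80)],
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

The point of the mechanism is that the application itself does not change; the extra isolation is requested declaratively in the pod specification.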

Tamura says that, for now at least, Google intends to make gVisor sandboxes available only on the Advanced Edition of GKE, which is typically used by IT organizations deploying containerized applications in production environments.

While Google has decided to employ gVisor to add another layer of security to a Kubernetes platform it invented, there is a fierce debate in the container community over the degree to which organizations can rely on gVisor or lighter-weight virtual machines as an alternative to legacy virtual machines, which add considerable operational overhead to a container environment in the form of guest operating systems. Google has yet to reveal its ultimate strategy, but as interest in running Kubernetes on bare-metal instances rises, many IT teams are looking for the most efficient way possible to isolate containers.

By making gVisor available in beta, Google at the very least is giving enterprise IT organizations the opportunity to gain hands-on experience with a new type of container runtime.

Cloud service providers, of course, have made massive investments in building out cloud services based on open source hypervisors that each of them has customized to varying degrees. IT organizations, meanwhile, have tended to standardize on commercial virtual machines from VMware. There is, however, a small community of IT organizations that have opted to deploy Kubernetes on bare-metal servers to reduce operational overhead and cut virtual machine licensing costs. In addition, virtual machines don't lend themselves to being deployed on graphics processing units (GPUs) and field-programmable gate arrays (FPGAs).

The debate over how to combine the best attributes of virtual machines and containers is far from over. But as emerging technologies such as gVisor become more accessible to enterprise IT teams, many more will start asking questions about the future role of legacy virtual machines in the era of cloud-native applications.

In the meantime, IT organizations should be paying attention, if for no other reason than to lower their Kubernetes costs: the more application workloads running per Kubernetes cluster, the lower the infrastructure cost per workload. That alone may be sufficient reason for many IT organizations to consider GKE a more economical alternative to rival Kubernetes services. Whatever the path forward, the more application workloads show up on Kubernetes, the more Google seems to have to offer.
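
To make that cost argument concrete, here is a rough, purely illustrative calculation; the node price, cluster size and workload counts are assumptions for the sake of the arithmetic, not Google pricing.

```python
# Illustrative only: hypothetical node price and workload counts, not GKE pricing.
NODE_COST_PER_MONTH = 50.0   # assumed cost of one worker node (USD)
NODES_IN_CLUSTER = 10        # assumed fixed cluster size

cluster_cost = NODE_COST_PER_MONTH * NODES_IN_CLUSTER
for workloads in (20, 40, 80):
    # Packing more workloads onto the same fixed-size cluster spreads the
    # same infrastructure bill across more applications.
    print(f"{workloads} workloads -> ${cluster_cost / workloads:.2f} per workload per month")
```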

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
