Going to the Cloud? Go Containers

Deploying a modern application to the cloud without errors is, more often than not, a Herculean task. To undertake that journey one release cycle after another, and reach your goal every single time, you need infrastructure that is high-performing, dependable and consistent. While VMs continue to be the poster child, containers and microVMs are looking to lead the coming era.

The Ever-So-Reliable Virtual Machines

A virtual machine, as VMware famously defined it, is a software computer. By abstracting the hardware, VMs give your applications the computing power and environment they need, while letting a single physical computer or server run multiple VMs at the same time and use its capacity more efficiently.

For enterprises moving from on-prem data centers to the cloud, this brought previously unseen advantages: centralized network management; optimized server utilization; multiple OS environments on the same machine, isolated from one another; consolidation of applications onto a single system; lower costs; simpler disaster recovery (DR); and more.

For new adopters of the cloud, such as enterprises with deep-rooted legacy systems or monolithic applications, VMs offered significant benefits. But, as the years passed, it became clear that VMs aren’t always ideal. Because each VM carries its own OS, images are larger, boot times are slower, and every instance consumes extra RAM and CPU cycles.

Meet the Lightweight, Fast, Dependable Containers

Containers—orchestrated by Kubernetes, Docker Swarm and their ilk—abstract the OS, providing a way to run applications in multiple isolated environments that share a single OS kernel, and often binaries and libraries too. This makes containers lightweight: images are often measured in megabytes and containers start in milliseconds. In fact, you might be able to put two or three times as many applications on a single server with containers as with VMs.
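To make that lightness concrete, here is a minimal sketch using the Docker SDK for Python. It assumes a local Docker daemon and the public alpine image; the image tag and command are illustrative choices, not a prescribed setup.

```python
# Minimal sketch: start a small container with the Docker SDK for Python.
# Assumes `pip install docker` and a running local Docker daemon; the alpine
# image and the echo command are illustrative placeholders.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# The alpine image is only a few megabytes, and the container starts almost
# instantly because it shares the host's kernel instead of booting its own OS.
output = client.containers.run(
    "alpine:3.18",
    ["echo", "hello from a container"],
    remove=True,  # clean up the container once the command exits
)
print(output.decode().strip())
```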

Much like VMs abstract the hardware, containers abstract the operating system. And much as VMs took away the burden of server management, containers significantly reduce software overhead—bug fixes, patch updates and the like need to happen for one shared OS instance rather than for every VM. In fact, today, Kubernetes-orchestrated containers often run on top of VM-based infrastructure, as much of enterprise IT is still VM-based.

More recently, among progressive application engineering teams, containers have become the preferred way to deploy applications in a multi-cloud environment. Especially for microservices-based applications, containers have distinct advantages over virtual machines in cost, efficiency, flexibility and speed of execution. Containers also make it possible to create an efficient, portable environment that stays consistent across development, testing and deployment.

Yet, they lack the near-unassailable security of VMs. Containers rely on process-level isolation rather than the hardware-virtualization boundaries that VMs enjoy. I don’t mean to imply that containers are not secure—they have container-level, cluster-level and kernel-level security controls.

Meet the MicroVM: Looks Like a Container, Acts Like a VM

MicroVMs are lightweight, hardware-isolated virtual machines with their own minimal kernel. They offer the security of hardware virtualization, as VMs do, with the agility of containers. The main difference between containers as we know them today and microVMs is that the latter offer hardware-backed isolation within a Kubernetes container pod.
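As an illustration of that last point, here is a minimal sketch using the official Kubernetes Python client. It assumes a cluster that already has a microVM-backed runtime (for example, Kata Containers over Firecracker) registered under the hypothetical RuntimeClass name "kata-fc"; the pod name, image and command are placeholders.

```python
# Minimal sketch: schedule a pod whose containers run inside a microVM,
# using the official Kubernetes Python client (pip install kubernetes).
# Assumes the cluster exposes a microVM-backed runtime under the
# hypothetical RuntimeClass name "kata-fc"; names and image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig for cluster access

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="untrusted-task"),
    spec=client.V1PodSpec(
        # runtime_class_name selects the container runtime for this pod;
        # a microVM runtime gives the pod its own guest kernel.
        runtime_class_name="kata-fc",
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="worker",
                image="alpine:3.18",
                command=["sh", "-c", "echo running inside a microVM"],
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```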

MicroVMs automatically hardware-isolate vulnerable or untrustworthy tasks to protect the rest of your environment. Each microVM is isolated from other microVMs and from the host operating system, so any attack stays contained within it and cannot spread to other parts of the application. Even when attacks bypass host- and network-based security—as sophisticated attackers today often manage to do—microVMs keep the endpoints secure. By the same model, microVMs can also protect sensitive applications and prevent data loss by granting only as much access to other systems or data as necessary. So, you can run trusted and untrusted tasks on a single system without worrying that the latter will compromise the former.

Yet, microVMs are unlike traditional VMs in that they are not full machines but “just enough” machines. They bring the hardware virtualization of VMs into the context of application containers, exposing only a small slice of OS resources and devices to each workload, so the added security comes with little loss in speed or performance.

Even though Bromium started the conversation around microVMs in 2012, it’s only this year that their momentum has picked up. Tools such as AWS Firecracker and Google’s gVisor have slowly joined the enterprise application engineer’s toolkit, yet microVMs remain unorthodox—showing great potential, but largely unproven.

Find Your Sweet Spot

The cloud, until very recently, was dominated by virtual machines. As more and more applications are deployed to VMs, their shortcomings become apparent. But jumping ahead to the microVM is too much of a risk. The microVM is a maverick—untested, untrusted and with a long way to go before mainstream acceptance.

Containers are your sweet spot. They’re significantly ahead of the traditional VM. They’ve found acceptance from the who’s who of tech—Netflix, Airbnb and the like swear by Kubernetes. Cloud providers are racing one another to make containerized deployment efficient, to say nothing of the dozens of advanced tools available in the market!

In 2020, if you’re not on the container bandwagon, you’re already well behind.

Niranjan Ramesh

Niranjan is a senior product marketing manager at HyScale, an application deployment platform for Kubernetes. At HyScale he works closely with a growing team of developers, DevOps engineers and industry experts solving some of the nuanced problems of cloud and Kubernetes adoption.
