April 29, 2017

Docker containers are cool: they help you be agile, and they consume resources more efficiently. That you know. But here’s the burning question: Will containers save you money? That’s what I recently asked Mark Balch of Diamanti. Here’s what he had to say about containers and total cost of ownership (TCO)—and how containers compare to virtual machines cost-wise.

Balch is VP of Products at Diamanti, a startup whose main offering is a container appliance for the data center. His company’s main goal is to convince businesses that they can save money by migrating workloads to Docker containers—especially if they do it in turnkey fashion, as the Diamanti container appliance is designed to facilitate.

One of Balch’s talking points in advocating for container adoption is lower TCO. Here’s what he had to say in a conversation on the topic.

Christopher Tozzi:  Start by telling us a little bit about your outlook on where containers are today from an operations point of view. Obviously you are among a class of vendors trying to bring container operations up to a level of maturity to give IT operators a simpler way to serve developers containerizing their applications.

Balch: The ability to operationalize containers is really uneven right now, for reasons of both maturity within the ecosystem and user awareness.

There is a tedious process around just deploying your container stack and picking tech and making decisions. And that applies even if you’ve chosen to go with a commercial vendor like Red Hat or CoreOS or Mesosphere. You still have to go through the work of setup and getting things to work properly. That is not a one-time event, that’s an initial event, and then quarterly or some interval you have to modify things because the technology is still changing.

Today, the container ecosystem poorly supports networking and storage and requires manual intervention. So that’s fundamentally at odds with the fast life cycle of containers—both the frequency of being spun up and down, and the total number of containers that have to be managed. What a lot of early production container users are experiencing is that networking and storage are too manual to keep up with.

Tozzi:  How do you see early days of container operations mapping back to early days of virtual machines?

Balch: One of the interesting comparisons to note with virtual machines versus containers is that VMware had a good five years (1999 to 2004) where they were the only game in town. VMware really baked the technology and focused very much on enterprise and scale requirements.

With containers, Docker was first on run-time and packaging, but they were not first to market on orchestration. They were doing the classic open-source platform play: grab as much market share as possible and figure out how to monetize later. But Kubernetes and Mesosphere have moved much faster on the management layer. That’s resulted in a lot more choice with container management, but also a real Wild West phenomenon that adds to a lot of the complexity of containers.

In the early days of VMware … it was a developer toolset for replicating and consolidating physical servers, for general-purpose applications (and small databases). It did take many years for VMware to get users to trust VMware with databases of any scale in a production environment. Even still to this day, most database vendors recommend bare-metal deployments when performance and scale are the primary requirements.

With containers, there’s no out-of-the-box standard for the high-performance storage or networking that is typically required to run databases at scale. So all of that has to be done manually, just like it’s been done manually in the VM world. And then those pre-provisioned resources have to be linked or connected, one at a time, to every container running a stateful instance. So in a sense, trying to take an entire application with its stateful components and put it into a container gives you the worst of both worlds: you’ve got to deal with the manual complexity from the VM world, and you’ve got to figure out how to make it all work with this brand new container technology that is still maturing.

The good news is that these problems are not a secret within the container community. Whereas 18 months or two years ago all people talked about with regard to containers was web and stateless app servers, for the last couple of years there’s been a groundswell of activity around stateful applications. You see Docker’s CEO talking about it at DockerCon, Mesosphere talking about real-time data pipelines, Kubernetes talking about it. Everyone’s talking about it, everyone’s looking at it—so that’s really good.

There are also emerging standards like CNI (the Container Network Interface), which is a starting point for integrating broad network services and network provisioning with Kubernetes and Mesos. We also have further-behind but growing activity around standardizing storage interfaces—for example, FlexVolume with Kubernetes. All these different projects have been releasing capabilities specifically designed for stateful applications. For example, Kubernetes has StatefulSets, Mesosphere supports multiple kinds of storage, and Docker is a little further behind but working on it.

Tozzi:  When you look at the TCO of containers versus VMs, how much do you think operations factors in today?

Balch: It factors in hugely.

Containers represent a developer insurgency—they’re like barbarians at the gate. Shadow IT is not a new concept, but it was the first iteration of developer insurgency. Taking weeks to get a VM was unacceptable, so developers started going to AWS and signing up with a credit card.

Containers allow those same developers to move even faster, because even on AWS, standing up a VM takes minutes, whereas containers start in mere milliseconds. So developers can scale their applications and deploy resources much faster than with traditional VMs on either public or private clouds.

But the TCO of containers gets blown up when you try to use manual processes from the old VM world. There’s an important recognition that containers represent the modern application life cycle, which is about much greater agility and velocity from deployment to production. As that life cycle accelerates with containers, it exposes operational weaknesses that had been latent in IT all along. If you’re driving a slow vehicle down a bumpy road, it’s an inconvenience, but it’s OK. That’s the traditional six- to 12-month release cycle for your ERP systems and things like that. But when you want to go at highway speeds, it fundamentally becomes unsafe. You take on business risk in trying to accelerate that application life cycle when you’re dealing with operational processes simply not built for that world.

Christopher Tozzi

Christopher Tozzi has covered technology and business news for nearly a decade, specializing in open source, containers, big data, networking and security. He is currently Senior Editor and DevOps Analyst with Fixate.io and Sweetcode.io.