What to consider before adopting Docker as part of your Enterprise DevOps Strategy

Examining the potential benefits of Docker and containers, and how to build a solid foundation for their implementation.

Andrew Phillips, VP of DevOps Strategy, XebiaLabs

Docker is a format for lightweight, isolated runtime environments called “containers” that offers several compelling benefits over traditional virtual machines. Docker containers are very easy to share between different teams, or even organizations. They can be run in an identical fashion wherever you like, from a developer’s laptop to a production environment. They’re convenient for local test environments, especially if you’re developing microservices: they’re much more resource-efficient to run than VMs, they can be linked to each other, and you can easily run multiple containers on a single machine.
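To make that concrete, here is a minimal sketch using the Docker SDK for Python (the docker package), assuming a local Docker daemon is available; the image, tag, and port mapping are illustrative, but the same few lines start the same container on a laptop or a production host.

    import docker

    # Connect to the Docker daemon (honours DOCKER_HOST and related settings)
    client = docker.from_env()

    # Pull and start a containerized service in the background; the same
    # image starts identically on a developer laptop or a production host
    container = client.containers.run(
        "nginx:1.9",                 # illustrative image and tag
        detach=True,
        name="demo-web",
        ports={"80/tcp": 8080},      # map container port 80 to host port 8080
    )
    print(container.short_id, container.status)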

If different parts of your application stack are owned by different people, you may want to consume a base image, say a vanilla MongoDB instance, and then tweak it for a particular application. Traditionally, that’s proven very tough with virtual machines, but Docker offers solid reusability and extensibility: one team can provide a base image, with different application teams layering their code and configuration on top.
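As a simple illustration of that layering, the sketch below uses a recent version of the Docker SDK for Python to build a derived image from an in-memory Dockerfile on top of a vanilla MongoDB base; the base version, tag, and tweak are all illustrative.

    import io

    import docker

    client = docker.from_env()

    # A derived image: start from the team-provided base and layer an
    # application-specific tweak on top (base version and flag illustrative)
    dockerfile = (
        b"FROM mongo:3.2\n"
        b'CMD ["mongod", "--wiredTigerCacheSizeGB", "1"]\n'
    )

    # Build from the in-memory Dockerfile; in practice this would usually be
    # a Dockerfile checked into the application team's repository
    image, build_logs = client.images.build(
        fileobj=io.BytesIO(dockerfile),
        tag="myteam/mongo-tuned:1.0",     # illustrative tag
    )
    print(image.tags)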

Find the business benefit

There’s always risk involved with adopting a new system or technology, so don’t move forward unless the business case for your organization is clear. Will containers deliver greater efficiency, cost-savings, or a boost to employee satisfaction? What does your organization stand to gain from containerization? Make sure you understand what it can and can’t deliver, so that you set realistic expectations.

Assuming you’re sold on the benefits of Docker, how do you plan to implement it? There are several things you need to look at.

A supporting framework

If you are going to adopt Docker, you’ll need some kind of framework that allows you to define multiple containers and orchestrate them. You might choose something like Kubernetes, Docker Compose, Marathon, or Mesos. You need a way to describe sets of containers that belong together, and a framework that can connect them to each other and to persistent storage, so they can communicate and store data.
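Each of those frameworks has its own way of declaring such a group. Purely to illustrate what they automate for you, here is a sketch using the Docker SDK for Python that wires two containers onto a shared network with a named volume for persistence; the images, network, and volume names are illustrative.

    import docker

    client = docker.from_env()

    # A private network and a named volume shared by the group of containers
    client.networks.create("shop-net", driver="bridge")
    client.volumes.create("shop-data")

    # Database container, attached to the network, with persistent storage
    client.containers.run(
        "mongo:3.2",
        detach=True,
        name="shop-db",
        network="shop-net",
        volumes={"shop-data": {"bind": "/data/db", "mode": "rw"}},
    )

    # Application container on the same network; it can reach the database
    # by container name via Docker's built-in DNS on user-defined networks
    client.containers.run(
        "myteam/shop-api:1.0",            # illustrative application image
        detach=True,
        name="shop-api",
        network="shop-net",
        environment={"MONGO_URL": "mongodb://shop-db:27017/shop"},
    )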

A runtime platform

You’re also going to want an operational platform that allows you to manage many containers running on multiple underlying machines from a single place, as opposed to just having the Docker binary installed on a bunch of VMs that you have to SSH into to check for yourself. A runtime platform can provide insight into the underlying machines, show you which containers are running and which containers talk to each other, and let you put in place the access control and auditing capabilities you need. You might consider Triton, Rancher, StackEngine, or Docker Universal Control Plane.
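Those platforms aggregate this view across every host in your estate. As a rough sketch of the per-host data they build on, the Docker SDK for Python can ask a single daemon what is running and how it is wired up:

    import docker

    client = docker.from_env()

    # Running containers on this one host; an operational platform collects
    # the same information from every machine and presents it in one place
    for c in client.containers.list():
        networks = list(c.attrs["NetworkSettings"]["Networks"].keys())
        print(c.short_id, c.image.tags, c.status, networks)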

Storage and networking

There are a lot of frameworks and plug points now that allow you to attach different classes of enterprise storage, from Amazon-based cloud storage, to local disks, to more traditional networked enterprise storage solutions. Portability is better than it used to be, but you still need to consider how you’ll handle persistent data if your application needs to write to disk.
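As a small example of the choice involved, the sketch below (Docker SDK for Python) starts the same database image once against a Docker-managed named volume and once against a bind-mounted host path; which of these your platform supports, and what actually backs them, depends on your storage plug-ins. The image and paths are illustrative.

    import docker

    client = docker.from_env()

    # Option 1: a named volume managed by Docker (or by a storage plug-in)
    client.containers.run(
        "postgres:9.5",
        detach=True,
        name="db-volume-backed",
        volumes={"pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
    )

    # Option 2: a bind mount to a specific host path; this ties the container
    # to that host, or to shared storage mounted at the same path everywhere
    client.containers.run(
        "postgres:9.5",
        detach=True,
        name="db-bind-mounted",
        volumes={"/mnt/nas/pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
    )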

Most container frameworks offer some form of overlay networking, but it may not meet your needs, particularly if you have very high-bandwidth networking requirements. If your chosen framework isn’t up to the task, you’ll need a plan for a different, and likely custom, approach.
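To ground that, the built-in option looks like the sketch below (Docker SDK for Python, assuming the engines are joined into a Swarm); whether the throughput and latency of such an overlay are acceptable is exactly the question to answer for your workloads. The network name is illustrative.

    import docker

    client = docker.from_env()

    # An overlay network spanning the hosts in the Swarm; containers attached
    # to it on different machines can reach each other directly
    overlay = client.networks.create(
        "app-overlay",
        driver="overlay",
        attachable=True,   # allow standalone containers to attach, not only services
    )
    print(overlay.name, overlay.attrs["Driver"])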

Cross-cutting concerns

New technologies initially focus on the big functional problems, but you’re going to need to think about many other challenges. How will you handle monitoring, access control, and auditing?
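Much of the raw material for those concerns is already exposed by the Docker engine. As a rough sketch, the Docker SDK for Python can pull recent logs and a resource snapshot for a container; this is the kind of feed a monitoring or auditing layer would consume, and the container name is illustrative.

    import docker

    client = docker.from_env()

    # Inspect one running container for monitoring and auditing purposes
    c = client.containers.get("shop-api")          # illustrative container name

    # Recent log lines, e.g. for an audit trail or log aggregation
    print(c.logs(tail=20).decode())

    # A single snapshot of resource statistics, e.g. for monitoring
    stats = c.stats(stream=False)
    print(stats["memory_stats"].get("usage"), "bytes of memory in use")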

Delivery process

You’ll also want some kind of technology to support your delivery pipeline. There are various dedicated CI and CD platforms that describe themselves as “container-native,” meaning that you put the container in at the beginning and the platform runs it through your pipeline and into production. You may already be using an enterprise pipeline orchestration tool like XL Release; in that case, supporting your container strategy may be as simple as replacing some of the steps that used to log into a server and run Puppet or Chef with a call to Docker or your chosen framework or operational platform.
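The deployment step itself can then shrink to something like the sketch below (Docker SDK for Python): rather than logging into a server and running Puppet or Chef, the pipeline pulls the image version it is promoting and swaps it in. The repository, tag, and port are illustrative.

    import docker
    from docker.errors import NotFound

    client = docker.from_env()

    REPO, TAG = "myteam/shop-api", "1.4.2"   # the version this pipeline run is promoting

    # Pull the exact image produced earlier in the pipeline
    client.images.pull(REPO, tag=TAG)

    # Stop and remove the previous version, if one is running
    try:
        old = client.containers.get("shop-api")
        old.stop()
        old.remove()
    except NotFound:
        pass

    # Start the new version
    client.containers.run(f"{REPO}:{TAG}", detach=True, name="shop-api", ports={"8080/tcp": 8080})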

You may also need to make sure that you’re equipped to handle containers from multiple teams. How will you manage a pipeline like this? How will you stay on top of all the dependencies?
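One practical piece of that is knowing exactly which build of each team’s container is flowing through the pipeline. As a sketch, the Docker SDK for Python can resolve a moving tag to its immutable content digest so that later stages refer to that instead; the repository name is illustrative.

    import docker

    client = docker.from_env()

    # Resolve a moving tag to an immutable digest so that every later stage
    # of the pipeline deploys exactly the same bits
    image = client.images.pull("myteam/shop-api", tag="latest")
    digests = image.attrs.get("RepoDigests", [])
    print("Pinned version:", digests[0] if digests else image.id)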

A hybrid setup

With all the existing systems out there today, most enterprises will be running a hybrid setup, with some applications, or parts of them, containerized and others not. Given the state of container-native tooling today, this represents a real challenge. How will you handle deployment when some components of an application are in containers and others are not? What happens when a fully containerized application needs to talk to non-containerized services? This needs careful consideration and planning if it’s to work properly.
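A common small-scale version of the second question is a containerized application that still needs to reach a database running directly on a VM. One approach, sketched below with the Docker SDK for Python, is to inject the external endpoint at run time via an extra hosts entry and an environment variable; the hostname, IP address, and image are illustrative.

    import docker

    client = docker.from_env()

    # The containerized part of the stack reaches a legacy, non-containerized
    # database through a stable hostname injected when the container starts
    client.containers.run(
        "myteam/orders-api:2.1",                           # illustrative image
        detach=True,
        name="orders-api",
        extra_hosts={"legacy-db.internal": "10.20.0.15"},  # VM-hosted database
        environment={"DB_URL": "postgresql://legacy-db.internal:5432/orders"},
    )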

If you do decide to adopt Docker, make sure that the supporting tools, platforms, and frameworks you choose offer the visibility, scalability, and control that you need.

 

About the Author/Andrew Phillips

Andrew Phillips is VP of DevOps Strategy for XebiaLabs, the leading provider of software for Continuous Delivery and DevOps. He is a cloud, service delivery, and automation expert and has been part of the shift to more automated application delivery platforms. He contributes to a number of open source projects, including Apache jclouds, the leading cloud library, and is a co-organizer of the DynamicInfraDays container community events. He is also a frequent contributor to DevOps Magazine, DZone, InfoQ, CM Crossroads, and SD Times.