Tackling Docker’s top deployment challenges

There’s ample reason to be excited about Docker, the Silicon Valley darling whose containerization platform promises to simplify the complex plumbing of IT, save bundles on infrastructure and make cloud computing even more attractive to CIOs. Experts say that Docker presents the opportunity, for the first time, to write an application once and deploy it anywhere. Like any new technology, however, Docker brings plenty of growing pains for IT departments and developers.

Common challenges for developers and IT professionals include building images for large applications, logging, monitoring, visibility, orchestration, security and deciding which production tools and methods to adopt. You can’t boil the ocean, of course. The first step is understanding the philosophy behind Docker and containerization and how it fundamentally changes development and IT management processes. Next, IT professionals should get familiar with the growing array of tools that help implement and manage containers. Finally, thinking carefully about which projects are best suited for your first foray into Docker can help ease the complexity.

But even before that, make the business case. A common mistake, as with any brand-new technology, is jumping in before determining the need for and potential outcomes of implementing microservices and containers in the first place. A useful rubric is the economics: will using Docker ultimately save your team money, or provide flexibility that supports revenue-generating applications? Let’s get started with a few friendly questions to determine your organization’s readiness for Docker:
1. What’s a Docker-friendly project? Containerization is ideal for isolated packages of code that do one thing very well and benefit from easy scalability. Any service that doesn’t rely on local state, such as a typical cloud application, can be containerized easily. Databases and storage services, by comparison, are not ideal: solving for shared state on ephemeral workloads adds complexity that makes them a poor place to start for an initial project.
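To make this concrete, here’s a minimal sketch of containerizing a stateless web service. The Python app, file names and base image are illustrative assumptions, not a prescribed setup:

```
# Dockerfile for a hypothetical stateless Python web service.
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
# No volumes and no local state: the container can be killed,
# rescheduled or scaled out without losing anything.
CMD ["python", "app.py"]
```

Because nothing in the image depends on where it runs, scaling up is simply a matter of starting more copies.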

2. Can you handle major change to deployment and management workflows? Implementing containers requires different management processes, which, to put it simply, are less structured and more ephemeral. Most enterprise IT departments have established set ways of working over many years, built around legacy technology and departmental silos. Organizations that are still early in their transition to Agile and DevOps are working through long release cycles, disconnected teams and hardware-specific management processes. That’s not a great fit for containers, which are not wedded to any specific infrastructure resources and require cross-functional visibility and coordination to implement effectively. Unlike with the monolithic applications of old, it’s not easy to zoom in to the underlying server and quickly fix a problem. The container might be living on a virtual machine that was provisioned just hours earlier; tomorrow, it may live somewhere else in the cloud.

Adopting containers requires tracking the lifecycles of the application and the infrastructure side by side: the model of infrastructure as code. Service management APIs become integral because they allow you to obtain status and make configuration changes gracefully at runtime, no matter where the container is running. All this requires rethinking tools, skills and processes. Ultimately, IT organizations should focus on removing human error from deployment and management pipelines and enabling programmatic management of the software lifecycle.
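As a small illustration, a service management API can start as a status endpoint baked into the application itself. This sketch assumes Flask as the web framework; the route and response fields are hypothetical:

```python
# Minimal sketch of a service management API: a status endpoint that
# tooling can poll no matter where the container is scheduled.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # Report enough state for an orchestrator or deployment pipeline
    # to decide whether to restart or reschedule this container.
    return jsonify(status="ok"), 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```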

3. How mature is your DevOps practice? As you may have guessed already, deploying and managing containers requires a strong DevOps orientation. Not there yet? Don’t stress. You can begin your journey to DevOps and containers all at once if you break it into small steps. Consider the old way of doing things: developers write code and toss it over the wall to the operations people, who have to figure out how to get it working in production; then it’s on to testing, where it might be flipped back to development, and so on. This is tedious and doesn’t meet the modern need to deploy, provision and change applications and their environments on the fly as the business demands.

Now everyone on the team must be involved in all of these stages, which occur continuously: develop requirements, write the code, test it, ship it, start anew. Speed, efficiency and the ability to roll back quickly to fix an issue are the keys to making all this work. Automation is hugely important at every step of the way, including testing and building the actual container images; a sketch of what that can look like follows below. DevOps is a major undertaking, so starting small with a manageable, low-risk project is the best way to approach the transition. See the last section below for ideas.
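Here is that sketch: “build, test, ship, roll back” reduces to a short, fully automated sequence of plain docker commands. The registry host and image tags are placeholders:

```
# Build an immutable, versioned image for each commit.
docker build -t registry.example.com/myapp:42 .

# Run the test suite inside the image before it ships anywhere
# (assumes a test runner such as pytest is installed in the image).
docker run --rm registry.example.com/myapp:42 pytest

# Push only images that passed; deployments pull by exact tag.
docker push registry.example.com/myapp:42

# Rolling back means redeploying the previous known-good tag.
docker run -d registry.example.com/myapp:41
```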

4. Do you need new tools? Having the right tools on hand is paramount, as tools can mitigate both human error and the natural human aversion to change. DevOps tools and Docker-specific tools can enforce the discipline to work faster, more collaboratively and in ways that won’t impact quality. Start with automated deployment/continuous integration tools such as Jenkins, TeamCity or even a custom scheduler on top of Mesos that can build containers for you. There is a growing array of Docker-friendly tools, including Mesosphere, Kubernetes and ZooKeeper, which should be selected based on the use case. Consider what kind of performance matters for your workload and where the business logic does the most work, and then choose tools accordingly. For example, if your existing infrastructure is already scaled for processing localized data, then the Kubernetes pod concept may be better aligned with your needs (see the sketch below). Finally, integrate production tools and processes with automated testing tools so that tests run automatically whenever new services, networks or other elements are introduced.
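The pod concept co-schedules containers that need to work on data locally. A minimal sketch, with placeholder image names:

```
# Minimal sketch of a Kubernetes pod: two containers scheduled onto the
# same host and sharing a network namespace, so the app can reach its
# cache over localhost. Image names are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: data-processor
spec:
  containers:
  - name: app
    image: registry.example.com/myapp:42
  - name: local-cache
    image: redis:6
```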

5. How can you avoid getting burned by the first project? Remember when the experts told you not to throw too many apps into the cloud at once, so that you could learn from your mistakes without jeopardizing the business? The same concept applies to moving to containers. Look for a layer in an application with minimal dependencies, where a container can solve a small yet useful problem. Take a web service that connects with a third-party application to stream data to the user or complete a transaction. You could create a container that serves as a middleman between your application and the third-party application, thereby reducing the impact of performance issues in the external vendor’s application or service. When the third-party application goes down or has a glitch, the new container service will cache data and continue to fulfill requests on behalf of the user, even if it means a slightly degraded experience. This “one service at a time” approach can improve the performance of an application while you experiment with service segregation, and it does so without incurring unnecessary risks, such as new bugs arising from changes to the core application.
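A minimal sketch of such a middleman service follows, assuming Flask and the requests library; the vendor URL and route are hypothetical:

```python
# Sketch of the "middleman" container described above: it proxies a
# third-party API and falls back to the last good response when the
# vendor is down. Flask and requests are assumed; names are placeholders.
import requests
from flask import Flask, jsonify

app = Flask(__name__)
VENDOR_URL = "https://api.vendor.example.com/quotes"  # hypothetical
last_good = None  # most recent successful vendor response

@app.route("/quotes")
def quotes():
    global last_good
    try:
        resp = requests.get(VENDOR_URL, timeout=2)
        resp.raise_for_status()
        last_good = resp.json()
        return jsonify(last_good)
    except requests.RequestException:
        if last_good is not None:
            # Vendor glitch: serve the cached copy, degraded but alive.
            return jsonify(last_good)
        return jsonify(error="vendor unavailable"), 503

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```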

When implementing Docker for the first time, look for the low-hanging fruit: code that interacts with databases, third-party vendors and stateless services. By doing so, you will have a much higher chance of success right off the bat, leading, ultimately, to more high-impact, business-driving projects down the road.

About the Author/Erik Blas

Erik Blas is a senior cloud architect at Clutch, a leading DevOps consultancy.