Docker and Legacy Applications: Better Together?

It’s next to impossible to have a conversation about where enterprise software development is going and avoid the topic of containers, or more specifically, Docker. The interest in and enthusiasm for containers have shot through the roof, and as with every “new” technology, the camps have formed. “I love it!” “It’s just another fad!” “I’m unsure!”

No matter which camp you find yourself in, Docker should not be ignored. It addresses a real set of problems, and there’s enough investment flowing into the ecosystem to make it viable.

The commercialization of containers is still early, and the naysayers love to point out the current deficiencies. There are some parallels with Bluetooth. When Bluetooth first hit the stage, pundits said it wouldn’t survive: “It’s not secure” (remember Bluejacking?), “It operates over too short a distance to be practical,” “There’s not enough support across devices,” just to name a few. Yet end users found great value in being free from cords. Today, Bluetooth is an expectation for every phone, car, and accessory (headphones, speakers, and selfie sticks included); a search on Amazon.com for Bluetooth yields almost one million results. Successful new technologies address real problems, enjoy broad industry support, and attract investment that funds innovation across the ecosystem. Containers meet all three requirements.

Technology companies are maturing containers’ core technology and venture capital will continue to fund new companies filling in the gaps. As this maturation happens, developers and operations engineers will realize increasing value from containers.

Cloud providers such as Amazon, Google, IBM, and Microsoft provide services (native or soon to be native) for running containers. (This author’s own company, Skytap, runs containers alongside VMs as a service for dev/test cloud environments as well.) All this adoption makes it a cloud technology, right? Well, sort of. It is cloud friendly but not cloud exclusive. Containers are also moving beyond Linux: Microsoft recently released the latest Technical Preview of Windows Server 2016, which now sports native container support.

Some believe today that containers are only for new applications, but this is absolutely false. There is a cornucopia of legacy applications that, for various reasons, cannot be moved to the cloud. Many of these applications are great candidates for containerization. Now, before you go all idealistic on me, I’m not suggesting you stuff the entire app into a single container and call it a day. Nor am I suggesting the entire app be rewritten from scratch. There is a middle ground: evolution.

I suggest a high-level, four-step approach for evolving legacy applications to make use of containers. Before getting started, however, there are a few prerequisites.

  • First, you are going to need a cross-functional team of architects, developers, testers, and operations engineers. You’ll want to pull together your smartest people. It sounds cliché, but this is going to be hard work and you’re laying the foundation for others in your organization to build on.
  • Second, you want the team sitting together; this is going to be a very fluid and dynamic endeavor. You won’t be successful if team members have to schedule meetings to get time with one another.
  • Third, work with the team to establish project tenets. These should be three or four guiding principles that the team will use to make decisions. It’s essential that the team is involved in crafting these tenets, as they need to buy into them and feel ownership of them.
  • Lastly, establish a learning mentality. There are a lot of unknowns, plenty of new concepts to absorb, and old concepts that need to be rethought. The team needs space to explore.

Once you’ve met the prerequisites, you’re ready to go. While every legacy application environment and team must address unique requirements, think of these four steps as a starting point for devising your own approach.

Step 1: Inventory the major functions of your application. Finding the right level of granularity may require a few iterations. Two functions are probably too few and 200 too many. A few examples of the appropriate granularity are authentication, user/account management, payment processing, search, and recommendations.

Step 2: Create the architectural pattern for how the new and the old will talk to one another. In doing this you want to limit the investment on the legacy side. A pattern I’m seeing become common is using a message queue as the bridge between the old and the new. Container networking is still under heavy development; a message queue adds a little complexity to your architecture, but it saves you from having to make an early bet on container networking technology. This step must also include the deployment architecture. Which flavor of Linux will you use? Remember, kernel version matters. How will you handle service discovery and orchestration? How will you pass secrets to containers? How will you capture and correlate logs? You’ll need to answer these and many more questions.
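To make the bridge pattern concrete, here is a minimal sketch of the consumer side, assuming RabbitMQ as the queue and the Python pika client. The queue name, message shape, and hostname are illustrative assumptions, not part of any prescribed design:

    # bridge_consumer.py - runs inside the new containerized service.
    # A minimal sketch: the legacy app publishes work items to a queue
    # and this service consumes them. Assumes RabbitMQ and pika.
    import json

    import pika

    # The legacy application only needs to know how to publish to this
    # queue; it never needs to know where the containers live.
    connection = pika.BlockingConnection(
        pika.ConnectionParameters(host="rabbitmq.example.internal"))
    channel = connection.channel()
    channel.queue_declare(queue="payment-requests", durable=True)

    def handle_message(ch, method, properties, body):
        request = json.loads(body)  # the message shape is up to you
        print("processing payment request", request.get("id"))
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="payment-requests",
                          on_message_callback=handle_message)
    channel.start_consuming()

Notice that the legacy side’s only new responsibility is publishing a message to the queue; nothing about container networking leaks across the boundary.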

Step 3: Pick the easiest, least risky function to refactor. Use it to learn containerization from both a technology and a process perspective. How does it change your pipeline? How does it change the way you run and manage containers? You’ll learn how to manage container “versions” moving through the pipeline, how to integrate the container build process into your overall build process, how to track which containers are running where, how to monitor container health and utilization, and much more.
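As one illustration of managing container “versions” through the pipeline, here is a sketch of a build step that tags each image with the git commit it was built from before pushing it to a registry. The registry address and service name are hypothetical; the docker build and docker push commands are standard CLI:

    # build_and_tag.py - a hypothetical pipeline step.
    # Tags each image with the short git commit hash so a running
    # container can always be traced back to the code that built it.
    import subprocess

    REGISTRY = "registry.example.internal"  # assumption: your registry
    SERVICE = "auth-service"                # assumption: service name

    def build_and_push():
        commit = subprocess.check_output(
            ["git", "rev-parse", "--short", "HEAD"], text=True).strip()
        image = f"{REGISTRY}/{SERVICE}:{commit}"
        subprocess.check_call(["docker", "build", "-t", image, "."])
        subprocess.check_call(["docker", "push", image])
        return image

    if __name__ == "__main__":
        print("pushed", build_and_push())

Tagging by commit means any container you find running in any environment can be traced straight back to the code that produced it.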

Step 4: Once you feel confident and you’ve worked out the bugs and kinks, get it into production. Watch and learn. What’s going well? What incorrect assumptions did you make? What requires additional tuning? Iterate as quickly as you can to address the things you got wrong, and don’t feel bad about getting things wrong! You’re learning. Don’t move past this step until production is at least as stable as before and the team has proven they can take an incremental change all the way through the pipeline in a reliable and repeatable manner.
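For the watch-and-learn part, even a crude periodic snapshot of container state is a useful starting point. This sketch assumes the Docker SDK for Python (“pip install docker”) and a reachable Docker daemon; as your orchestration tooling matures, you would swap in its API instead:

    # health_snapshot.py - a crude starting point for watching containers.
    # Assumes the Docker SDK for Python ("pip install docker") and a
    # local Docker daemon; swap in your orchestrator's API as it matures.
    import docker

    client = docker.from_env()

    for container in client.containers.list(all=True):
        # container.status is e.g. "running", "exited", "restarting"
        print(f"{container.name:30} {container.status:12} "
              f"{container.image.tags}")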

With a stable production environment and a repeatable pipeline, it’s time to pick the next function/service and repeat the process. I suggest starting back at Step 1, validating earlier assumptions and decisions, and adjusting accordingly.

You’re probably wondering how long each step will take. Step duration depends heavily on the complexity of your application and the strength of your team. Don’t worry too much about an extra week here or there. Remember, the team is doing something hard and they’re learning along the way. This requires patience and nurturing—things not always in supply in our business.

The benefits of this approach are:

  • Your team builds new skills. To learn something, you need your hands on it. You need to figure out what works, what does not work, and why. You need to try it in your environment, from development all the way to production.
  • Development and operations will have to work together to understand the technology and figure out the best way to apply it to your environment and application. They will argue and fight about it, but keep them focused on the goal and they will come out a stronger team.
  • You get started with a new technology that will pay dividends. The old saying “you have to pay to play” holds true here. To know what benefits you can reap from containers, you need to start using them. There is no prescriptive guidance or instruction manual; you need to try it out and adjust in real time.

This approach isn’t for every application. Some applications have very strict regulatory requirements that are incongruent with unproven technologies. Your choice of which legacy application to apply this to is just as important as your choice of who you put on the project. So is Docker ready for production? You will have to answer that in the context of your company, application, team, and environment.

A colleague and I were recently discussing Docker. He was skeptical about its viability and asked how many people are running it in production. I told him the question was beside the point; I remember when people asked the same questions about virtual machines. Take a methodical approach, have patience, and make adjustments along the way. And most importantly, have fun doing it!

About the Author: Dan Jones

Dan Jones is a director of product management at Skytap Inc., an “Environments-as-a-Service” public cloud offering. He has held key product and program management roles over a 20+ year career at Nordstrom, Microsoft, IBM Rational, and others, with a focus on better integration of development and IT operations for faster software delivery with high performance and reliability.
