In the minds of many IT professionals, Docker containers are an emerging technology that still needs to prove its mettle in production environments. In truth, Docker containers turned 5 years old this week and can trace their lineage back to the early days of Linux.
Of course, what makes Docker containers different from Linux containers is that they are truly portable between different instances of Linux. Millions of developers have embraced Docker containers as a method to move application workloads easily without having to refactor their applications.
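The portability Messina describes can be illustrated with a minimal, hypothetical Dockerfile (the application name and dependencies here are illustrative, not from any real project). The image it produces bundles the app and everything it needs, so it runs unchanged on any Linux host with a Docker engine:

```dockerfile
# Sketch: package a hypothetical Python app and its dependencies
# into a single portable image.
FROM python:3.11-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code itself
COPY . .
CMD ["python", "app.py"]
```

Because the image carries its own userland, the same artifact can move from a developer laptop to an on-premises server to a public cloud without refactoring.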
In addition to being used to build portable “cloud-native” applications, the next most common use case for Docker containers is to employ them to encapsulate existing enterprise applications, says David Messina, chief marketing officer for Docker Inc. That capability makes it much easier for enterprise IT organizations to lift and shift legacy applications into a public cloud, where many of them wind up being more cost-effective to run because they are not frequently accessed. Messina says that move alone means the return on investment in Docker containers is achieved in less than 90 days.

But while portability has been the hallmark of the last five years of Docker containers, it’s the ability to more easily craft microservices using Docker containers that will result in them being remembered as the beginning of a new epoch in computing, says Messina.
In fact, given how long it takes for new methods of application development to create a critical mass of applications in production environments, Messina says the full impact containers will have on enterprise IT is just now starting to be felt. Most of the applications built using containers are stateless, but there is a move underway to start containerizing stateful applications that access persistent storage. Most enterprise IT applications, after all, are built on top of a database that accesses some form of persistent storage to create a stateful application.
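One common way to containerize a stateful application is to mount a named volume, so the data outlives any individual container. The Compose file below is a minimal sketch (the image, password, and volume name are illustrative assumptions, not details from the article):

```yaml
# Sketch: a named volume gives a containerized database
# persistent, stateful storage across container restarts.
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example   # illustrative only
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

Deleting and recreating the `db` container reattaches the same `pgdata` volume, which is what makes the application stateful rather than disposable.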
The rise of microservices, in turn, is transforming DevOps. Because it’s now easier to build and maintain resilient microservices based on containers, the way applications are being constructed is changing. In some instances, developers are now even taking over responsibility for managing individual microservices end to end.
Less clear is where all these containers are going to wind up running. Today, most containers run on a virtual machine hosted in the cloud or in an on-premises environment. Alternatively, large numbers of containers run in platform-as-a-service (PaaS) environments. Docker Inc. contends that more flexible container-as-a-service (CaaS) environments such as Docker Enterprise Edition (EE) will supplant PaaS environments, while virtual machines will be marginalized as organizations become more comfortable with running containers on bare-metal servers.
IT organizations can run hundreds of containers on a bare-metal server, compared to tens of containers on top of a virtual machine. As the tooling becomes more robust and isolation issues get addressed, the need for virtual machines will inevitably decline. MetLife, for example, was able to reduce its total costs by more than 60 percent, in part by relying on containers to reduce the number of commercial virtual machine licenses it required.
No one can say for sure where containers will be in another five years. Serverless computing frameworks that employ containers to make IT resources available via event-driven architectures are gaining momentum. But the one thing many will look back on and marvel at is just how long it took for containers to finally catch on.