A question I am often asked is: will containers replace virtual machines? With containers being lightweight and portable, and offering fast startup times, adoption would seem to be a no-brainer. While containers are not entirely new, the technology has evolved rapidly in the past few years, especially with Docker and, more recently, with Microsoft entering the marketplace. As with all emerging technologies, some components don't get much attention until the technology is well adopted by the community. For containers, security and management are the areas that need more attention.
As with the adoption of public cloud, security is a topic that comes up in many deployment discussions. Docker has published guidelines for securing Docker deployments, but one question remains: how good is Docker's security model compared to that of well-established hypervisor vendors? In traditional hypervisor models, the guest operating systems are completely isolated from each other, and the hypervisor layer handles interactions between the guests and the host OS. While it is technically feasible that a hypervisor vulnerability could compromise the host OS and infect all running guests, this scenario is highly unlikely and no such attack is known to have occurred to date. Note, however, that there have been hypervisor vulnerability announcements from vendors such as Citrix. When deploying container technology such as Docker, system administrators need to pay special attention to the permissions around the Docker daemon, for example by not running containers as "root" and by restricting who can talk to the daemon. As with any deployment, administrators need to fully understand the security model of the architecture, implement documented best practices and follow vendor vulnerability announcements. The debate over how secure containers are will continue for some time, and it may be a leading reason behind their slow adoption for production usage.
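To make the daemon-permissions point concrete, here is a minimal audit sketch. The function name and the specific checks are my own illustration, not part of any Docker tooling: it flags two risky conditions on the daemon's control socket (by default `/var/run/docker.sock`), since anyone who can write to that socket can issue daemon commands, which is effectively root on the host.

```python
import os
import stat

def audit_docker_socket(path="/var/run/docker.sock"):
    """Flag overly permissive ownership or mode bits on the Docker socket.

    Hypothetical helper for illustration: write access to this socket
    is equivalent to root on the host, so it should be owned by root
    and never world-writable.
    """
    findings = []
    st = os.stat(path)
    mode = stat.S_IMODE(st.st_mode)
    if mode & stat.S_IWOTH:
        findings.append(f"{path} is world-writable (mode {oct(mode)})")
    if st.st_uid != 0:
        findings.append(f"{path} is not owned by root (uid {st.st_uid})")
    return findings
```

An empty result means neither check fired; a script like this could run periodically under cron or a configuration-management tool to catch permission drift before it becomes an incident.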
With containers and microservices, developers are splitting their applications and services into smaller, interconnected containers. While this architecture brings benefits in redundancy, portability, and continuous integration and delivery at the component or microservice level, it adds complexity for the operations staff supporting the entire solution. Take log management as an example: operators need to monitor the logs and performance stats of all the containers to get a holistic view of the health of the application. This might seem simple for a few containers, but when deployments reach hundreds, if not thousands, of containers, it can become an operational nightmare. Fortunately, with the release of Docker 1.5, Docker introduced the Docker Stats API. With this API, Docker exposes endpoints from which vendors can pull critical statistics such as CPU usage, memory usage, network I/O and disk utilization. Vendors such as Logentries, Loggly, and Logstash have been connecting to these endpoints to provide centralized logging and stats. In addition, Docker has a Remote API that can be used not only for basic commands such as start, stop and restart, but also to capture container logs and determine the health of containers. Mature technologies such as virtualization have no shortage of management tools. While these APIs may not be the complete answer, they provide the interface third-party vendors need to build powerful monitoring solutions that fill the gap in container management.
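As a sketch of what these stats endpoints expose, the snippet below derives CPU and memory percentages from a one-shot payload shaped like the Docker Engine's `/containers/<id>/stats` response. The numbers are made up for illustration, the field names follow more recent versions of the Engine API than the 1.5 release discussed above, and the CPU formula mirrors the delta calculation the `docker stats` CLI performs between two samples.

```python
import json

# Illustrative payload shaped like a one-shot (stream=false) response from
# Docker's container stats endpoint. The values are invented for this example.
sample_stats = json.loads("""
{
  "cpu_stats": {
    "cpu_usage": {"total_usage": 400000000},
    "system_cpu_usage": 3000000000,
    "online_cpus": 2
  },
  "precpu_stats": {
    "cpu_usage": {"total_usage": 300000000},
    "system_cpu_usage": 2000000000
  },
  "memory_stats": {"usage": 52428800, "limit": 104857600}
}
""")

def cpu_percent(stats):
    """CPU usage between the current and previous sample, as a percentage."""
    cpu_delta = (stats["cpu_stats"]["cpu_usage"]["total_usage"]
                 - stats["precpu_stats"]["cpu_usage"]["total_usage"])
    system_delta = (stats["cpu_stats"]["system_cpu_usage"]
                    - stats["precpu_stats"]["system_cpu_usage"])
    if system_delta <= 0:
        return 0.0
    return (cpu_delta / system_delta) * stats["cpu_stats"]["online_cpus"] * 100.0

def memory_percent(stats):
    """Memory usage as a percentage of the container's limit."""
    mem = stats["memory_stats"]
    return mem["usage"] / mem["limit"] * 100.0

print(f"CPU: {cpu_percent(sample_stats):.1f}%")      # CPU: 20.0%
print(f"Memory: {memory_percent(sample_stats):.1f}%")  # Memory: 50.0%
```

A monitoring agent would fetch this JSON per container over the daemon's socket and ship the derived numbers to a central store, which is essentially what the logging vendors mentioned above do at scale.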
Container technology is changing and innovating on a daily basis, but it still has work to do to catch up to virtualization. That's not to say that companies should not develop and deploy their applications in containers. In fact, they should take a good hard look at doing just that. Docker has a strong ecosystem that continues to expand with leading industry vendors. Let's not forget Microsoft, which will be releasing its Nano Server container technology in Windows Server 2016. It's just a matter of time before containers become the mainstream deployment platform for applications.