In his most recent GigaOm article, “We’re finally headed towards autonomous, self-migrating containers for cloud-application automation,” David Linthicum discusses the value that containers, such as Docker and CoreOS Rocket, bring to DevOps and cloud systems management. For the record, nothing Dave states in the article is incorrect with regard to the value containers provide for portability. However, there is an issue with the big-picture view he presents of the role containers play in cloud application delivery.
Containers are predicated on the goal of deploying and managing n-tier application designs. By their nature, they manage n-tier application components, e.g., database servers, application servers, web servers, etc., at the operating-system level. Portability is inherent because all operating system and application configuration dependencies are packaged inside the container and delivered with it to any other operating system platform. Containers are preferable to virtual machines here because they share compute platform resources very well, whereas virtual machine platforms tend to acquire and hold resources on a machine-by-machine basis.
But, architecturally speaking, we have learned that n-tier applications have inherent limitations. They are designed to scale up, with very little attention paid to scaling down and none paid to scaling out or in. They are typically rife with single points of failure and tend to manage their own state via cluster-style computing. Each tier of the n-tier architecture must be scaled independently of the others; for example, the database tier has different scaling mechanisms and techniques than the application or web server tiers. Failover in these architectures is provided through redundant servers that peer with each other to handle failure, and the tiers communicate over proprietary TCP/IP ports, sometimes requiring firewall rule changes to support. In short, n-tier applications are very expensive to build, operate, and maintain.
This is very different from cloud- and web-scale application design. Cloud-scale applications are stateless by nature, with any application state managed by cache or database services. The unit of compute is the process, not the CPU, which enables greater scalability. They are typically built on languages whose runtimes operate across multiple operating system and platform-as-a-service environments, such as Node.js, Ruby, Go, and Java, so proprietary lock-in is not really a major issue. Cloud-scale apps leverage HTTP/S as their primary means of communication, and they scale easily across network architectures, allowing businesses to run in private data centers and leverage cloud for excess capacity when needed.
Cloud-scale applications actually require fewer custom components to deploy, making them easier to manage over time. This is in direct contrast to the vision Dave lays out at the end of his article, where container sprawl can easily lead to hundreds of containers running across a diverse set of hardware architectures. Such an environment may end up costing twice what the application costs to operate today and may actually suffer more outages, even though it is highly automated. That is, the individual containers become easier to deploy, but the results of deploying into a nebula of containers will be undefinable.
The unfortunate reality is that, in the short term, re-platforming n-tier applications using virtualization and containerization will continue as a trend for the foreseeable future. Skills to redesign applications to leverage cloud models are in short supply, and it seems we are racing toward disposable code rather than good architecture.
The good news is that the disposable code era supports the use of cloud-scale architecture by default, so at least we won’t be adding to the mess. Moreover, I credit Dave for his guiding wisdom that “All things considered this could still be a much better approach to building applications on the cloud. PaaS and IaaS clouds will still provide the platform foundations and even development capabilities.” However, with the hype that exists around containers right now, it’s important to point out that containers are not the panacea the market represents them to be.
JP Morgenthal is an internationally renowned thought leader in the areas of IT transformation, modernization, and cloud computing. JP has served in executive roles within major software companies and technology startups. Areas of expertise include strategy, architecture, application development, infrastructure and operations, cloud computing, DevOps, and integration. He routinely advises C-level executives on the best ways to use technology to derive business value. JP is a published author with four trade publications; his most recent is “Cloud Computing: Assessing the Risks”. JP holds both a Master’s and a Bachelor of Science in Computer Science from Hofstra University.