Logicworks Dissects Docker Destiny, Explains Statements, Q&A Style

Logicworks makes three claims about the future of Docker and practical Docker utility in the enterprise. ContainerJournal examines those claims.

Cross-System Deployment, Migration, and Automation Magic

Logicworks Claim #1: “Companies will actually use Docker to deploy across multiple systems, migrate applications, and remove manual reconfiguration work. Hybrid clouds bring supposed flexibility benefits, but many enterprise applications are burdened by internal dependencies, network complications, and on-premises database clusters. Docker fills the gap and makes managing security and scalability across multiple complex systems attainable.”

Being the inquisitive folks we are, ContainerJournal had to poke at this supposed panacea for granting application wishes and turning hybrid cloud dreams into reality.

ContainerJournal: In what way are so many enterprise applications burdened by internal dependencies, network complications, and on-premises database clusters? What is the burden specifically?

Jason McKay, Senior Vice President / CTO, Logicworks: The vast majority of enterprises have hundreds of applications, many of which depend on a common set of app infrastructure, packages, or “bridge” applications. When the applications work smoothly, this isn’t an issue; but whenever something fails, discovering the origin of the failure is exponentially more difficult when a) application boundaries are not discrete and b) enterprise teams have no map of application dependencies, leading to significant time spent combing through logs just to discover the basic architecture of the program. Any change, even a small one, requires rebuilding and retesting the entire application, because a small change may have cascading effects.

This is a widely acknowledged issue that the industry is attempting to solve, whether through a total microservices transformation or through a combination of refactoring plus Application/Service Dependency Mapping (SDM) software services that give users real-time discovery and visualization of all of an application’s interactions with its underlying app infrastructure.
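One way to see what “discrete boundaries” buy you is to declare the dependencies explicitly. Here is a minimal, hypothetical Docker Compose sketch (the service and image names are invented for illustration) in which the dependency chain lives in a versioned file rather than in tribal knowledge:

    version: "3.8"
    services:
      web:
        image: example/web:1.0    # hypothetical front-end image
        depends_on:
          - api                   # web fails? check api next, per the map
      api:
        image: example/api:1.0
        depends_on:
          - db
      db:
        image: postgres:15

Even a small file like this doubles as the map of application dependencies McKay describes: when something fails, the chain to follow is written down instead of reconstructed from logs.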

ContainerJournal: How does Docker make managing security and scalability across multiple complex systems attainable?

McKay: Some of this is Docker; some of this is just building your application to conform to a microservices model, or merely with well-defined module boundaries. To take a very simple example, upgrading your OS in a monolith can be a multi-day or multi-week process. Why? Because you have to understand which part of your application depends on which now-outmoded modules. It’s much easier to update to the latest version of your OS when you can update it in a single module (or in a single Docker base image) that you know will not have ramifications beyond the boundaries of your module/container, and then deploy that base image to a single Docker cluster (perhaps 10% of your compute power) to maintain availability while testing the new OS. Theoretically, the fact that containers are “self-contained” means they should be able to run “anywhere,” since the developer is abstracted away from the kinds of complications described above.
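To make that concrete, here is a rough sketch under hedged assumptions (the image tags, paths, and app name are all invented): the OS upgrade lives on one line of a Dockerfile, and the rebuilt image can be staged on a small slice of capacity before a full rollout.

    # Dockerfile: the OS change is confined to the base image line
    FROM ubuntu:22.04             # was ubuntu:20.04; the only line that changes
    COPY app/ /opt/app/
    CMD ["/opt/app/start.sh"]

    # Build the upgraded image and run it as a canary alongside production
    docker build -t myapp:os-upgrade .
    docker run -d --name myapp-canary myapp:os-upgrade

Routing roughly 10% of traffic to the canary, as McKay suggests, is left to whatever load balancer or orchestrator fronts the cluster.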

When Docker Puts Its Shoulder to DevOps Culture Transition, Change Happens

Logicworks Claim #2: “Companies will use Docker to push a DevOps culture transition forward. Docker’s recent acquisition of SocketPlane is just a part of its plan to make the software enterprise-ready along with major upgrades in networking that allow containers to communicate across hosts. Developers can eliminate their worst enemy—manual system configuration work—by provisioning Docker containers, running tests against them, and deploying them to production in minutes.”

That Docker will contribute to a DevOps culture shift is quite believable. But how does the SocketPlane play figure in? Lacking further comment from Logicworks, I will say that anything that removes moving parts, eases implementation and support, and makes DevOps generally more attractive for adoption will aid a DevOps cultural and mindset shift. The SocketPlane acquisition does this for Docker.
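As a sketch of that build-test-deploy loop (the image name, registry URL, and test script below are hypothetical), the whole cycle is a handful of Docker commands, and the artifact that passes the tests is the same artifact that ships:

    docker build -t myapp:candidate .                 # provision the container image
    docker run --rm myapp:candidate ./run-tests.sh    # run tests against it
    docker tag myapp:candidate registry.example.com/myapp:prod
    docker push registry.example.com/myapp:prod       # production pulls this tag

No hand-edited server configuration appears anywhere in the loop, which is precisely the manual work the claim says developers get to eliminate.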

Technically, SocketPlane enables containers on different hosts to connect to each other without a centralized network controller.
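In today’s Docker, that capability surfaces as the overlay network driver. A minimal sketch (image names hypothetical):

    docker swarm init             # enable swarm mode on this host
    docker network create --driver overlay --attachable app-net
    docker run -d --name api --network app-net example/api:1.0
    docker run -d --name web --network app-net example/web:1.0
    # containers attached to app-net resolve each other by name, across hosts

The network state is shared among the participating hosts themselves, with no separate controller appliance to deploy or babysit.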

Docker Doubles, Triples, Even Sextuples Workload Density

Logicworks Claim #3: “Save on resources and time. Containers and virtual machines both emulate multiple hardware systems that are isolated from each other, but VMs each have a full operating system while containers share the host OS. The time and resources saved by leveraging containers can have a big impact on application density—some estimates show you can run twice the workload using containers compared with Xen or KVM VMs.”

ContainerJournal: What specific impacts does this have on application density?

McKay: Containers allow for much greater server density by removing redundant OS elements from the containers themselves. We have not tested the limits of application density. It’s not that complicated: all of your containers share a single Linux kernel (maybe X MB), so instead of paying that OS overhead X times, once per VM, you pay it once for X containers. These estimates really depend on how big the kernel is plus how big your base image is.
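To put made-up numbers on that arithmetic (a back-of-envelope sketch, not a benchmark):

    # Hypothetical figures: 16 GB host, 1 GB OS footprint, 512 MB app footprint
    host_ram_mb=16384; os_mb=1024; app_mb=512
    echo "VMs per host:        $(( host_ram_mb / (os_mb + app_mb) ))"   # ~10 (OS paid per VM)
    echo "Containers per host: $(( (host_ram_mb - os_mb) / app_mb ))"   # ~30 (OS paid once)

With those invented figures the container host triples density, which is why real-world estimates swing so widely with kernel and base image size.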

ContainerJournal: Do your own due diligence, as “your mileage may vary.”

David Geer

David Geer’s work has appeared in Scientific American, The Economist Technology Quarterly, CSO & CSOonline, FierceMarkets, TechTarget, InformationWeek, Computerworld, Byte.com, ITWorld.com, IEEE Computer Society’s Computer magazine, IEEE Distributed Systems Online, Government Security News, Laptop, Smart Computing, Technical Support, The Hosting Standard (Canada), TechWorld.com (UK), SIGnature, Processor, and the Engineering News-Record. David served as a technician at CoreComm in Cleveland, OH prior to venturing into writing.
