If you’re a DevOps person, chances are good you’ve come across Docker containers. Companies of all sizes are reaping the benefits of containers across the application development life cycle. Containers are accelerating developer onboarding, testing and release management.
Containers: An Essential Tool for Modern Software
By providing mechanisms for software to be easily packaged along with all of its dependencies into runnable units, containers can dramatically accelerate the application life cycle, including testing, deployment and upgrades, by eliminating variability. Much like interlocking toy blocks, reusable containers can be pulled from various registries or catalogs and assembled rapidly into fully functional applications. Another benefit is application portability, allowing organizations to run containerized applications on any cloud, virtual machine or physical server with a Docker runtime.
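As a minimal sketch of what "packaged with all of its dependencies" looks like in practice (the application files `app.py` and `requirements.txt` are hypothetical placeholders), a Dockerfile declares everything the application needs to run:

```dockerfile
# Hypothetical example: package a small Python web app with its dependencies.
FROM python:3.11-slim

WORKDIR /app

# Install the app's dependencies inside the image, not on the host.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Add the application itself and declare how to start it.
COPY app.py .
CMD ["python", "app.py"]
```

Once built with `docker build`, the resulting image runs identically on any host with a Docker runtime, which is what makes the portability described above possible.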
As with any new technology, however, organizations often struggle with where and how to get started with containers.
Public Cloud: Some Pros and Cons
Most public cloud providers offer easy-to-deploy container environments, such as Google’s GKE, Amazon ECS and Microsoft’s Azure Container Service. These services automate deployment and provide easy access to popular open-source container orchestration solutions including Docker native orchestration (Swarm) and/or Kubernetes. In addition, these cloud providers offer rich (but proprietary) web interfaces, CLIs, REST APIs and often SDKs to manage the increasing variety of infrastructure service elements that underpin each environment. These elements include things such as virtual hosts, persistent storage, load balancers, VPNs, firewalls, DNS services and more. Most even provide their own container registry service.
This convenience can come at a cost, however:
- While economical for short-term use, long-term costs need to be carefully considered. Fees can mount quickly, with usage-based pricing applying to each service component.
- Organizations may need to deploy applications closer to on-premises systems or datasets, may be averse to storing proprietary software in cloud-based registries, or may face other policy or regulatory constraints that make cloud deployment impractical.
- The convenient cloud-specific APIs, scripting facilities and value-added services can amount to a “Hotel California” scenario where it becomes difficult and costly to leave a cloud provider.
A Private Container Service
While cloud-based container management solutions have their place, in our view a cloud-agnostic private container service is often a better answer. A private container service should not be confused with an on-premises-only approach. A private container service still runs open-source Docker components and open orchestration frameworks such as Kubernetes, but it provides a common toolset for on-premises, cloud, multi-cloud and hybrid cloud deployments. It brings the convenience of cloud-based management tools to both on-premises deployments and the public cloud, without exposing users to the intricacies and nuances of container management on each public cloud platform.
By employing a private container service, infrastructure services such as networking, DNS and load balancing, which organizations invariably need as they move to production, are placed firmly under their control. The same is true of facilities like certificate and secret management that are required as firms move from experimentation to production. In addition to supporting public or cloud-provider registries such as Docker Hub, Quay.io or Amazon’s ECR, a private container service also supports private registries and application catalogs, optionally avoiding any dependency on third-party registries.
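As a quick illustration of the private registry idea, the open-source registry image can be run locally with the Docker CLI; the port and the `myapp:1.0` image name below are illustrative placeholders:

```shell
# Run the open-source Docker registry (port 5000 is its default).
docker run -d -p 5000:5000 --name registry registry:2

# Tag an existing local image for the private registry and push it there.
docker tag myapp:1.0 localhost:5000/myapp:1.0
docker push localhost:5000/myapp:1.0
```

Images pushed this way never leave the organization's infrastructure, which addresses the registry-related policy concerns noted earlier.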
The Best of Both Worlds?
In our experience, before organizations jump into the cloud with both feet, a private container service is worth a look. Taking this approach does not preclude doing initial deployments in the public cloud, but it helps ensure downstream flexibility.
Using a private container service paves the way toward hybrid deployments, and by providing a common set of infrastructure management services, it provides DevOps teams with more control over their environment and abstracts away the differences between cloud provider-specific toolsets. Organizations benefit from ease of use, while avoiding the risk of being locked into a single cloud provider, registry or orchestration framework.
About the Author / Bill Maxwell
Bill Maxwell is a senior software engineer at Rancher Labs. He has extensive experience in software engineering and operations, and has led continuous integration and continuous delivery (CI/CD) initiatives. Prior to joining Rancher Labs, Bill worked at Go Daddy for six years in various capacities, including engineering, development and managing cloud services product deployments. Bill resides in Phoenix. He holds a Master’s degree in Information Management from Arizona State University and a BSEE from California State Polytechnic University.