Containers’ Impact on Application Modernization

Container platforms have become an integral part of any hybrid cloud landscape, accelerating multi-cloud adoption in enterprises. Containers can be deployed in any cloud (IBM, AWS, Azure, Google, etc.) or on-premises. In fact, most public clouds offer container platforms as a service.

Containers help in infrastructure cost optimization. According to Docker, the most prominent container provider, an enterprise can save 40% or more on its infrastructure spend by moving from virtual machines to container platforms. Container management platforms such as Kubernetes (K8s) have made deployment and management of containers even easier. Containers drastically reduce the infrastructure footprint needed to host an application. Additionally, microservices have become the most prominent application architecture pattern for building cloud-native applications. Containers help realize many of the 12-factor characteristics of microservices, making container technology an essential part of microservices-based transformations. Container platforms inherently drive automation, resulting in optimized development, management and operations of applications in the cloud.

Some industries with high regulatory and compliance requirements are hesitant to move their workloads to the public cloud. Container platforms let them begin their cloud journey on-premises, with the option to shift workloads to any target at any point in time. With cloud-native development on a platform as a service (PaaS) such as Red Hat OpenShift Container Platform (RHOCP) or Pivotal Cloud Foundry (PCF), the adoption of container platforms has gained significant traction, and with a good deal of success.

Engineering Approaches to Application Containerization

Once an enterprise makes the architectural decision on the target container platform, it is important to establish and enable container-specific engineering practices for the practitioners as well as other relevant stakeholders. Microservices architecture, DevOps engineering practices, telemetry aspects, security-first mindset and best practices in container application design and development are key to successfully implementing a containerization strategy.

Base images are the foundation on top of which applications are packaged in the container world. App development comprises building the application using a chosen technology (e.g. Java, NodeJS, Python) and packaging the application binary into a container image that can be deployed and scaled out on the target container platform. Continuous integration and continuous deployment (CI/CD) capabilities are critical in this life cycle of container-based application development and release. Enterprises should maintain a secured private repository that holds all certified images needed for application development, and security scanning and vulnerability checks should be embedded in the CI/CD pipeline. With container adoption, the entire software development life cycle is optimized, resulting in higher speed to market and dynamic scaling of applications.
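As a rough sketch of the packaging step, a multi-stage Dockerfile builds the application with a full toolchain and then copies only the binary onto a slim base image. The image tags and the `app.jar` name below are illustrative; in an enterprise setting, the `FROM` lines would point at certified images in the private registry:

```dockerfile
# Build stage: compile the application with a full JDK/Maven image
FROM maven:3.9-eclipse-temurin-17 AS build
WORKDIR /app
COPY pom.xml .
COPY src ./src
RUN mvn -q package -DskipTests

# Runtime stage: only the application binary on a slim JRE base image
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY --from=build /app/target/*.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

The resulting image contains no build tools, which keeps it small and reduces its attack surface before it ever reaches the scanning step.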

What Does a Containerized World Look Like?

With the adoption of container technology and a well-engineered DevOps program, enterprises can move their application workloads to any target. Container platforms have built-in application management and operations capabilities, which result in:

  • Enterprises getting started on their journey to the cloud.
  • A reduced infrastructure footprint.
  • Improved application availability.
  • Faster application development and deployment.
  • Less human intervention in operations.
  • Fewer security-related issues.
  • Autoscaling of applications.
  • Cloud-burst capabilities.

While containers bring a lot of benefits, there are certain challenges that enterprises need to plan for when starting a containerization journey:

  • How to right-size the container platform?
  • Is the container platform being used optimally to bring down infrastructure cost?
  • How to ensure container platform security?
  • How to ensure only secure images are published to the enterprise repository?
  • Are the images built following best practices? Do we end up creating huge monolithic images?
  • Do we have the right number of base images?
  • How to manage the life cycle of the images?
  • How fast can applications be moved to containers?
  • Can the current application SLAs be met while being migrated to containers?
  • Is a big team needed to manage the container platform?
  • Has the release process changed to support the velocity that container technology brings in?
  • Is it possible to effectively monitor a container platform?
  • Have we ended up in a container mess, aka proliferation of images?

Continuous Measurement of Business Value

A deep dive into the application development, build and deployment process in a traditional VM-based environment, versus that of the container world, helps determine the key parameters that can be measured to quantify the benefits of containerization. The diagram below (Figure 1) depicts a typical software development life cycle (SDLC) flow in a traditional VM environment.

Figure 1: CI/CD in a traditional VM environment

If a well-defined CI/CD pipeline is in place, the CI pipeline ensures that the application binaries are built to standard, the exposed functionality is unit-tested, and code quality and application security scans are completed before the binary is published to a repository. The CD pipeline picks the appropriate binaries and deploys them to the target VM-based environment. Deployment typically happens to a runtime installed on top of the VM guest OS. Additional infrastructure is typically needed to enable functions such as load balancing and security. In this model, the setup enforces separation of duties, which results in separate development, operations, security and infrastructure teams (in many cases drawing a wall between these teams).

The diagram below (Figure 2) depicts the application life cycle in a containerized environment.

Figure 2: CI/CD in a containerized environment

A container base image typically consists of an operating system and additional libraries built on top of it (such as a runtime) to support the application that will be embedded in the container. A new custom image is usually built by layering the application binary on top of a base image pulled from a secured enterprise image registry (holding pre-scanned and approved images). An "image scan" step verifies the security and compliance of the new image, covering the underlying OS, runtimes and required libraries as well as the application itself, thus ensuring the integrity of the built image. This happens before the containers are deployed, resulting in minimal security- and compliance-related disruption during operations.
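As an illustration, the build-scan-publish flow can be expressed as a CI pipeline fragment. The sketch below uses GitHub Actions syntax with the open-source Trivy scanner; the registry, image names and pipeline structure are placeholders, and any CI orchestrator with an image-scanning step would serve the same purpose:

```yaml
# Illustrative CI fragment: build the custom image, scan the full stack
# (OS, runtime, libraries, app binary), and push only if the scan passes.
jobs:
  build-scan-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build the custom image on top of the approved base image
        run: docker build -t registry.example.com/myteam/myapp:${{ github.sha }} .

      - name: Scan the built image before it can be published
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: registry.example.com/myteam/myapp:${{ github.sha }}
          exit-code: "1"            # fail the pipeline on findings
          severity: CRITICAL,HIGH

      - name: Push to the enterprise registry
        run: docker push registry.example.com/myteam/myapp:${{ github.sha }}
```

Because the scan gates the push, only images whose full stack has passed security and compliance checks ever reach the enterprise registry.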

With these two models as a baseline, the following are the key areas that can be monitored and measured to record the benefits of containerization.

Infrastructure Optimization

A container platform can be set up on virtual machines or bare metal. Assuming VMs for simplicity of comparison, the number of VMs required for the container platform, its capacity, is determined by the number of containerized applications and the resources (compute, storage, memory) they require. Take, as an example, a typical JEE application that needs to be highly available: the deployment model will have an application server cluster spanning a minimum of two VMs. Though the resource utilization across the two VMs may be quite low, this is required to attain high availability. In the container world, the same application can be containerized with a base image of the app server and deployed to a container platform as two containers that may use only part of a VM. In the VM world, we may also need additional infrastructure, such as a load balancer, to distribute load between the two VMs in which the application is deployed. A container platform, however, has built-in capabilities (e.g. services in K8s) to realize these common functions, which helps reduce the infrastructure footprint.
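The same JEE example can be sketched as a Kubernetes manifest, where the two-VM app-server cluster collapses into a two-replica Deployment and the dedicated load balancer into a Service. The application and image names here are hypothetical:

```yaml
# Illustrative manifest: two replicas give the same availability as the
# two-VM cluster, at a fraction of the infrastructure footprint.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-app
  template:
    metadata:
      labels:
        app: orders-app
    spec:
      containers:
        - name: orders-app
          image: registry.example.com/myteam/orders-app:1.0
          ports:
            - containerPort: 8080
---
# The Service load-balances across the replicas, replacing a dedicated LB tier.
apiVersion: v1
kind: Service
metadata:
  name: orders-app
spec:
  selector:
    app: orders-app
  ports:
    - port: 80
      targetPort: 8080
```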

The following KPIs can be measured to track this benefit of containers:

  • Infrastructure cost.
  • Number of VMs/physical servers.

Application Deployment

As seen in the models described above, the time to build an application binary is the same. In the container model, additional steps to build the container image and scan the built images are introduced. While these additional steps appear to lengthen the build process, the time they take is negligible (milliseconds to a few seconds). Container-based deployments bring added benefits that eliminate much of the manual effort or custom scripting that follows a VM-based deployment. A good example is environment-specific configuration for an application. This is typically managed in config files or ".properties" files in traditional deployments and would require custom scripts to be invoked from the CI/CD orchestrator or, in many cases, executed manually. This adds to the deployment and verification time, resulting in extended deployment windows as well as exposure to additional human error. In the container model, by contrast, the platform provides built-in capabilities (e.g. ConfigMaps in K8s) that remove these additional steps and risks, drastically reducing deployment time and errors.
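A minimal sketch of the ConfigMap approach is shown below; the keys and values are hypothetical stand-ins for what would otherwise live in a `.properties` file patched by custom scripts:

```yaml
# Illustrative ConfigMap holding environment-specific settings, one per
# environment (dev, staging, prod), managed declaratively by the platform.
apiVersion: v1
kind: ConfigMap
metadata:
  name: orders-app-config
data:
  DB_URL: jdbc:postgresql://db.staging.example.com:5432/orders
  FEATURE_FLAGS: "fast-checkout"
```

The deployment then references the ConfigMap (for example via `envFrom` in the container spec), so the same image can be promoted unchanged from one environment to the next with no manual configuration step.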

The following KPIs can be measured to track this benefit of containers:

  • Deployment time per app.
  • Deployment window per app.
  • Downtime related to deployment per app.
  • Number of features released per month.
  • Number of failed or unusable deployments.


Security and Compliance

In a VM environment, security and compliance issues affecting the operating system, runtimes and libraries (e.g. Java libs) need to be monitored outside the purview of applications. Remediation happens mostly outside the scope of the application build and deploy, resulting in a considerable amount of downtime while activating the fixes. This also leaves the application exposed to longer windows of vulnerability. In a container-based model, this risk is mitigated in the image build and scanning steps. A container image is a full stack comprising the OS, runtimes, libraries and the app binary itself, so any security or compliance concern affecting any layer of the image can be mitigated outside the runtime scope of the application.

The following KPIs can be measured to track this benefit of containers:

  • Number of security-related tickets per app.
  • Number of security-compliance patching related deployments per app.


Application Availability

In a traditional VM model, each release has an associated planned deployment window that mandates application downtime, to accommodate manual steps, validations and rollback activities in case a deployment fails. In unplanned situations such as a disaster, even if DR environments are available, it takes time to bring the environment up and get the application running, so recovery time is considerably high. Container platforms have inherent self-healing capabilities and the ability to run multiple application versions in parallel, helping to reduce application downtime. They natively support blue/green deployments and canary/A-B testing, enabling teams to validate an application version without bringing down the running version. Container platforms also support autoscaling of applications through automated, policy-driven container creation and destruction. The availability challenges of a typical VM environment are largely non-existent in a containerized application deployment model.
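The policy-driven autoscaling mentioned above can be sketched as a Kubernetes HorizontalPodAutoscaler; the target name and thresholds here are illustrative:

```yaml
# Illustrative autoscaling policy: the platform adds or removes replicas
# automatically as average CPU utilization crosses the target, with no
# human intervention.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders-app
  minReplicas: 2          # never drop below the HA baseline
  maxReplicas: 10         # cap spend during load spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```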

The following KPIs can be measured to track this benefit of containers:

  • Application downtime per month.
  • Application recovery time.
  • Deployment windows per month.
  • Application availability (99.9%, 99.99%, 99.999%, etc.).


Team Structure

The traditional infrastructure model (e.g. VM-based) enforces separation of concerns between development, operations, security and infrastructure teams. The main reasons are the different privilege levels required to perform tasks and the risks of performing some of those tasks in production (the deployed state of the application versus pre-deployment). With the container model, however, most of these tasks are done upfront as part of the application build and deploy life cycle. The advanced automation inherent in container platforms is also a key driver of this shift left. The functions of development, operations and application security can be owned by a single team: a full-stack squad with newer roles such as site reliability engineer (SRE) and security consultant, alongside the typical engineering skills, covers the application end to end by applying the principle, "Write code, operate code." While a smaller team of infrastructure/platform engineers still needs to manage the container platform and underlying infrastructure, that team can also be considerably smaller, thanks to the out-of-the-box monitoring capabilities of container platforms.

The following KPIs can be measured to track this benefit of containers:

  • FTEs per squad.
  • FTE reduction per app.
  • Apps managed per squad.

Some ‘Gotchas’ to Consider

While containerization brings a lot of benefits, there are certain best practices to be followed. A few of the common ones are:

Container platform capacity: It is important to size the container platform appropriately. Cluster sizes should be determined by closely examining the resource demands of the applications that will be moved to the cluster, or that are planned to be built and deployed in it. Oversizing a cluster results in unused resources, negating the infrastructure optimization that container platforms bring. One should plan for a capacity estimator and an automated, template-driven, configurable cluster creation approach.
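Capacity estimation is ultimately driven by the resource requests of the workloads scheduled onto the cluster. A fragment of a container spec like the one below (with illustrative numbers) is what the scheduler sums up when deciding how many nodes a cluster needs:

```yaml
# Illustrative per-container resource spec: requests drive scheduling and
# capacity planning; limits keep a noisy container from starving its neighbors.
resources:
  requests:
    cpu: 250m        # guaranteed share, used by the scheduler for placement
    memory: 256Mi
  limits:
    cpu: 500m        # hard ceiling on consumption
    memory: 512Mi
```

Summing the requests of all planned workloads, plus headroom for platform components and failover, gives a defensible starting point for cluster sizing.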

Container image sizes: Proper policies should be enforced for image-building. Define standards around the optimal number of layers in an image, the size of an image and authenticity of base images from which the custom image is being built. Images should be published and distributed through a secured enterprise image registry. The bigger the images, the slower the build and deploy process. This will have a negative impact on the deployment and availability benefits that container platforms bring in.

Logging and tracing: In the container model, applications become more granular through the adoption of newer architectural patterns such as microservices and event-based processing. Transaction traceability will be a challenge without a proper correlation mechanism, which directly impacts application monitoring and the ability to take proactive steps to fix issues down the line. It is best to adopt a standardized logging mechanism, a well-defined correlation mechanism and the ability to visualize log events for easier tracking of application issues, both functional and operational.


Containerization of applications as a modernization pattern is here to stay and will be one of the most important strategic initiatives for enterprises over the next couple of years. While there is a widespread belief that business value through containerization is a given, the best practices and KPIs discussed in this article should be carefully considered as part of the overall strategy. Containerization is indeed an exciting journey, with several new things to learn and implement, while attaining flexibility, speed and cost savings for the enterprise.


This article was co-authored by Joyil Joseph, Chief Architect, Multi-Cloud Tiger Team at IBM. Connect with him on LinkedIn and Twitter.



Sunil Joshi

Sunil Joshi is Distinguished Engineer & CTO, Cloud Application Development at IBM.
