Over the last five or six years, software container technology has come to dominate the data center, from small private data centers to large enterprise data centers and public clouds. Alongside this innovation, adoption of HPC (high-performance computing) and artificial intelligence (AI) has grown within enterprises to handle the massive amounts of data generated by devices across industry verticals. The goal of collecting such large amounts of data is to produce analysis and real-time actionable insights.
Recently, several proofs of concept and studies have evaluated whether containers can handle HPC workloads and address important pitfalls left open by traditional HPC workload management platforms. Kubernetes, the de facto container orchestration platform, and Docker container technology have emerged as frameworks that can power HPC workloads and enable several features for large-scale computing. In this article, let's see how containers and HPC can come together.
Everyone in the data center and software technology industry is well aware of containers. Some advantages that containers bring are:
- OS-level virtualization that runs on bare metal.
- They package all the dependencies an application needs to run, isolated from other containers on the same OS.
- They make applications portable across different environments.
- Containers start lightning fast compared to VMs, enabling quick rollout of the software stack for new services and speeding up overall delivery.
- Containers boost DevOps pipelines.
- They support microservices architecture, which further enables agility and scalability for applications.
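To make the packaging point concrete, here is a minimal Dockerfile sketch for a containerized simulation code; the base image, package list and `solver` binary are illustrative assumptions, not a specific project's build.

```dockerfile
# Minimal sketch: package an MPI-based simulation and its dependencies.
# The base image, packages and ./solver binary are illustrative assumptions.
FROM ubuntu:22.04
RUN apt-get update && \
    apt-get install -y --no-install-recommends openmpi-bin libopenmpi3 && \
    rm -rf /var/lib/apt/lists/*
# Precompiled simulation binary (assumed to exist in the build context)
COPY solver /opt/app/solver
ENTRYPOINT ["/opt/app/solver"]
```

The resulting image carries the application and its MPI runtime together, which is exactly the dependency isolation and portability the list above describes.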
Technology adoption has reached the point where most industries (health care, industrial, automotive, aviation, etc.) support use cases that generate huge amounts of data per day. For example, a self-driving car is set to generate 40TB of data every eight hours, and industrial machinery will produce 100TB of data per day for processing, storage and analytics. No single system has enough computing power, storage and infrastructure to process such a high volume of data.
This is where HPC comes into the picture: it leverages distributed compute and storage resources to solve complex problems over large data volumes. HPC clusters are commonly known as supercomputers. Complex algorithms run over large data sets to generate insights. HPC systems use large sets of CPUs or GPUs in a parallel architecture, creating a computing resource pool big enough to execute complex mathematical algorithms.
Currently, HPC systems are widely used for scientific research, military operations, astrophysics, big data analytics, finance, cybersecurity, weather/climate modeling and bioinformatics. With HPC systems, we can get the results of mathematical simulations in minutes rather than hours or days. HPC is widely adopted by enterprises in their private data centers, and subscribers of public cloud vendors can get HPC on the Azure, AWS and Google Cloud platforms.
How Containers Improve HPC
HPC workloads are typically monolithic in nature, and HPC applications traditionally run over large data sets across the data center. The main advantage of containerizing these applications lies in the portability of containers: we can package HPC applications once and run them across clusters to deal with large sets of data.
Containers power microservices architecture, which has its own benefits for application services. An application built with a microservices-based methodology is composed of small services that can each be packaged in a container. Each service maintains its own life cycle, with service-specific independent development, granular scaling, patching and fault remediation.
In a similar way, HPC application workloads can gain the advantage of isolated management, including scaling and development. This scaling capability of containers is important: HPC workloads may face spikes in data processing requirements that must be handled without any downtime of services. HPC applications deployed in containers can scale independently to absorb such spikes.
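As one way to sketch that independent scaling, Kubernetes can attach a HorizontalPodAutoscaler to a containerized data-processing service so replicas grow and shrink with load; the workload name and thresholds below are hypothetical.

```yaml
# Hypothetical autoscaler; the target Deployment and numbers are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hpc-preprocess
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpc-preprocess   # containerized data-ingest service (assumed)
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75   # add pods when average CPU exceeds 75%
```

During a data-processing spike the autoscaler adds pods up to the configured maximum, then scales back down, all without service downtime.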
Additionally, in microservices architecture, containers are valued for speed, scalability and modularity. HPC applications bundle a large set of libraries and software components with complex dependencies to run high-end workloads. Containers, in this case, help HPC workloads hide that complexity and make deployment easier.
How HPC Improves Containerization
Initially, containers were considered incompatible with HPC workflows. Now, with the development of several open source projects, new ways to use containers for HPC workloads have emerged. Some of these projects are Singularity, Charliecloud, Shifter and Podman, among others.
Currently, containers are mainly orchestrated by Kubernetes, which is designed to keep containers running continuously. HPC workloads, by contrast, execute and run to completion to perform an assigned task, for example a simulation over financial data or a genomics workflow. Running HPC workloads in containers means containers are invoked on demand by the HPC system, perform their task and then terminate.
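This run-to-completion pattern maps naturally onto a Kubernetes Job, which starts pods for a batch task and stops when the task finishes; the Job name, image and parallelism settings below are illustrative assumptions.

```yaml
# Hypothetical batch Job; the image name and settings are illustrative.
apiVersion: batch/v1
kind: Job
metadata:
  name: genomics-sim
spec:
  parallelism: 4     # run four worker pods at a time
  completions: 16    # the Job succeeds after 16 pods finish
  backoffLimit: 3    # retry failed pods up to three times
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: registry.example.com/genomics-sim:1.0   # assumed image
```

Unlike a long-running Deployment, this Job releases its compute resources as soon as the last pod completes, which is the behavior HPC schedulers expect from batch tasks.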
To evaluate the integration of containers with HPC, various studies have been conducted, and some open source and commercial solutions have been released. Kubernetes and Singularity have been adopted by large-scale enterprises to handle HPC workloads. As monolithic HPC applications are distributed to support ever-larger pools of data, containerization has become important for enabling dynamic orchestration, portability and scalability. We will see more advancements in both containers and HPC as time progresses.