Univa Partners With Sylabs to Advance Singularity Containers

Univa and Sylabs have announced a partnership under which the open source Singularity container technology developed by Sylabs will be supported in high-performance computing (HPC) environments running Univa workload management software.

Singularity containers are already widely employed in HPC environments as an easier alternative to Docker containers. Rob Lalonde, vice president and general manager for Navops at Univa, says support for Singularity containers will extend the scope of Univa’s Navops platform to make it easier to configure and manage clusters of all types used across HPC environments.

Lalonde says Singularity and Docker containers are being embraced widely in HPC environments as companies look to use cloud bursting to dynamically add capacity, or to shift workloads into public clouds to run machine learning algorithms on graphics processing units (GPUs), for example.

Singularity containers are gaining momentum in those environments because they are not only much easier to set up, but they also offer better support for parallel storage and high-speed interconnects such as InfiniBand, which are critical for HPC applications.

Specifically, Singularity containers allow developers to package applications and their dependencies, including definitions, configurations, metadata and security keys, into a single file that is cryptographically verifiable. Singularity also eliminates the need for a container daemon that runs as a privileged user, and provides built-in Message Passing Interface (MPI) support, direct access to specialized HPC hardware from within containers and compatibility with container images pulled from Docker Hub.
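To illustrate the single-file packaging model, a minimal Singularity definition file might look like the sketch below (the file names `app.def` and `app.py` and the package choices are hypothetical, not taken from Univa or Sylabs):

```
# app.def — hypothetical definition file; bootstraps from a Docker Hub image
Bootstrap: docker
From: ubuntu:20.04

%post
    # install the application's dependencies inside the image at build time
    apt-get update && apt-get install -y python3

%files
    # copy the application from the host into the image
    app.py /opt/app.py

%runscript
    # command executed when the container is run
    exec python3 /opt/app.py "$@"
```

Running `singularity build app.sif app.def` produces a single SIF file that can be signed with `singularity sign app.sif` and later checked with `singularity verify app.sif`, with no privileged daemon involved at any step.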

Many of those organizations are even employing Singularity and Docker containers in a nested fashion, says Lalonde, an approach that makes it possible for HPC environments to mix and match container engines as needed.

Lalonde estimates that somewhere between 30 and 40 percent of the Univa customer base has already embraced containers to one degree or another. Most of those organizations are not yet using containers to deconstruct HPC workloads into a series of granular microservices; rather, most of the initial container focus is on portability, says Lalonde.

Historically, HPC environments have tended to eschew any form of virtualization. Most of them did not want to sacrifice the processing horsepower to run virtual machines. But containers provide a much lighter-weight approach that HPC organizations are using to drive application workload migration rather than trying to share multiple application workloads across multiple physical machines.

HPC environments are rapidly emerging as one of the first examples of how multiple container engines that support a common standard for running container images will be deployed side by side over time. In the future, it’s probable that container engines optimized for specific classes of applications will be deployed alongside more general-purpose container engines such as Docker.

In the meantime, Lalonde notes that increased reliance on containers is also increasing the need to implement DevOps processes within HPC environments. As is often the case in any IT environment, the primary issue with DevOps these days is not so much the underlying technology as it is changing the processes and culture of HPC organizations, which have always been naturally cautious when it comes to embracing any emerging technology.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
