Hammerspace Brings Global Data Management to Kubernetes

Hammerspace this week announced it has added Kubernetes support to its global data management platform.

Company CEO David Flynn says Hammerspace has developed data management software that makes persistent volumes running on any form of persistent storage available to a wide range of applications. By adding support for the Container Storage Interface (CSI) defined for Kubernetes clusters, that capability is now being extended into the realm of containerized applications, says Flynn.
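As a sketch of how a CSI-backed volume is typically consumed in Kubernetes (the provisioner and class names below are illustrative placeholders, not Hammerspace's actual identifiers), an application claims storage through a StorageClass bound to a CSI driver:

```yaml
# Hypothetical StorageClass backed by a CSI driver; the provisioner
# name is a placeholder, not an actual Hammerspace identifier.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: global-data
provisioner: csi.example.com   # replace with the vendor's CSI driver name
reclaimPolicy: Retain
---
# A workload requests storage through a PersistentVolumeClaim; the CSI
# driver provisions or attaches the underlying volume on demand.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: global-data
  resources:
    requests:
      storage: 10Gi
```

Because CSI is a standard interface across Kubernetes clusters, the same claim can be satisfied by any conformant driver, regardless of where the underlying storage actually lives.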

Flynn says Hammerspace has developed a global namespace system, designed to be accessed as a software-as-a-service (SaaS) application, that pushes the control plane for disaggregated storage into the cloud. That approach makes it possible for applications to access persistent storage wherever it’s located, he says.

Interest in building stateful containerized applications that require access to persistent storage has been on the rise for some time now. The challenge many IT organizations face is they don’t always want to have to dedicate storage resources to a specific Kubernetes cluster, nor do they necessarily want to replicate data that already exists somewhere in the enterprise just to make it available to a specific Kubernetes cluster.

The global namespace developed by Hammerspace takes advantage of the metadata made available on distributed storage systems to make persistent volumes available to applications anywhere, says Flynn. That allows organizations to take a more granular approach to making data available to distributed applications. In effect, each persistent volume can now be programmatically accessed as if it were just another microservice, he says.

Most storage administrators are still coming to terms with the implications of containers and the Kubernetes clusters they run on. For that reason, Flynn says Hammerspace anticipates most of the demand for its approach to data management will be driven by DevOps teams tasked with finding the most efficient way possible to make persistent data available to modern applications. Databases such as MySQL and MongoDB require persistent data to be accessible from any cluster across a hybrid multi-cloud environment. DevOps teams need to be able to safely and quickly test applications using production data from mixed-vendor network-attached storage (NAS) environments, he notes.
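A minimal sketch of how a database such as MySQL consumes persistent storage on Kubernetes (image, names, and the storage class are illustrative assumptions, not a Hammerspace-specific configuration): a StatefulSet declares a volume claim template, and the cluster's storage layer satisfies it.

```yaml
# Sketch of a stateful database workload on Kubernetes.
# The storageClassName is assumed to point at a CSI-backed class.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: example        # use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: global-data   # assumed CSI-backed class
        resources:
          requests:
            storage: 20Gi
```

The claim template is the point of decoupling: the DevOps team declares what the database needs, while the storage layer behind the class decides where the data actually resides.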

Eventually, storage administrators will stop thinking in terms of managing boxes in favor of a service-centric approach to providing access to data. Until then, however, DevOps teams building containerized applications may need to find some way to force the issue. By invoking a data management platform delivered as a cloud service, DevOps teams avoid all the traditional challenges associated with acquiring and setting up a data management system without necessarily requiring storage administrators to specifically provision storage on their behalf.

Storage administrators are not necessarily dead set against supporting modern applications. It’s just that providing access to data needs to occur in a way that doesn’t require them to set up additional processes for what is still a relatively small share of their workloads compared to the legacy applications they must continue to support.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
