Statehub Unfurls Service for Moving Data Across K8s Clusters

Statehub, previously known as Replix.io, this week launched a private beta of a managed service through which IT teams can move data more easily between geographically distributed Kubernetes clusters.

Michael Greenberg, head of product for Statehub, says the company’s namesake service is based on a data fabric it developed for managing stateful data across fleets of distributed Kubernetes clusters. The Statehub service deploys that data fabric on behalf of IT organizations that need a way to consistently move data between multiple clusters, adds Greenberg. In effect, Statehub turns data management tasks into a service in Kubernetes environments, notes Greenberg.

Statehub provides access to pre-configured storage, replication and networking services running across multiple public clouds that can be invoked as part of a DevOps workflow using infrastructure-as-code (IaC) tools, says Greenberg. Statehub currently supports Amazon Web Services (AWS) and Microsoft Azure, with other cloud platforms to follow, he adds. The goal is to enable IT organizations to manage data without having to worry about which cloud service provider is employed or whether they will be locked into a specific cloud platform, says Greenberg.
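Statehub’s own API is not documented here, but the general pattern Greenberg describes, requesting replicated storage declaratively as one step in an automated pipeline, can be sketched with the official Kubernetes Python client. The storage class name, claim name and size below are assumptions for illustration, not documented Statehub identifiers:

```python
# Minimal sketch, assuming a Statehub-backed StorageClass named
# "statehub-replicated" (hypothetical). An IaC pipeline would apply a
# claim like this to provision replicated storage for a workload.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() inside a cluster

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-db-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "statehub-replicated",  # hypothetical name
        "resources": {"requests": {"storage": "20Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```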

In addition to enabling IT teams to move data, Greenberg notes the Statehub service will also play a critical role in thwarting ransomware attacks by enabling data to be rolled back to a point prior to when an IT environment was infected by malware.
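How Statehub implements that rollback is not detailed here, but the point-in-time recovery Greenberg describes can be illustrated with the standard Kubernetes CSI volume snapshot pattern. The snapshot class, claim names and size below are assumptions for illustration:

```python
# Generic CSI snapshot-and-restore pattern, shown only to illustrate
# point-in-time rollback on Kubernetes; it is not Statehub's documented
# mechanism. Names and sizes are hypothetical.
from kubernetes import client, config

config.load_kube_config()
snapshots = client.CustomObjectsApi()

# 1. Capture a point-in-time snapshot of the claim before an incident.
snapshots.create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace="default",
    plural="volumesnapshots",
    body={
        "apiVersion": "snapshot.storage.k8s.io/v1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": "orders-db-pre-incident"},
        "spec": {
            "volumeSnapshotClassName": "csi-snapclass",  # hypothetical
            "source": {"persistentVolumeClaimName": "orders-db-data"},
        },
    },
)

# 2. After an infection, restore by creating a new claim whose
#    dataSource points at the known-good snapshot.
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default",
    body={
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": "orders-db-restored"},
        "spec": {
            "accessModes": ["ReadWriteOnce"],
            "resources": {"requests": {"storage": "20Gi"}},
            "dataSource": {
                "apiGroup": "snapshot.storage.k8s.io",
                "kind": "VolumeSnapshot",
                "name": "orders-db-pre-incident",
            },
        },
    },
)
```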

While most of the applications deployed on Kubernetes clusters today are stateless—in the sense that they store data externally—the number of stateful applications that need to access persistent storage directly on a cluster is steadily increasing. Many organizations are looking to unify the management of compute and storage, for example, by deploying Kubernetes clusters on hyperconverged infrastructure (HCI) platforms. The challenge they face is that all that data now needs to be managed. A data fabric provides the underlying mechanism for sharing data across a distributed network environment as part of an overall data management framework.

It’s not quite clear who within IT organizations will be responsible for managing that data. In some instances, data operations (DataOps) teams are creating data pipelines that connect to applications being developed by DevOps teams. In other cases, traditional IT administrators are assuming responsibility for managing storage in Kubernetes environments alongside the storage platforms they already manage. One way or another, as the volume of data in Kubernetes environments continues to increase, the need to automate its management will become a more pressing issue.

In the meantime, IT teams need to ask themselves to what degree they want to build and maintain specific capabilities their organization may not use regularly. Many functions that IT teams once managed themselves are now readily accessible as a service. IT teams need to honestly assess which IT functions they should manage directly versus relying on a service run by IT professionals who are clearly specialists in that area.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as editor-in-chief for CRN and InfoWorld.