IBM Doubles Down on Red Hat Kubernetes Storage Platform

IBM and Red Hat today revealed that the core technologies within Red Hat OpenShift Data Foundation (ODF) will become the foundation for the next generation of the IBM Spectrum Fusion storage platform.

Scott Baker, chief marketing officer for IBM Storage, says it’s clear the storage technologies used to create Red Hat ODF are applicable to a much wider range of use cases beyond cloud-native applications running on Kubernetes clusters. Those core components include the open source Ceph software-defined storage system, the Rook storage orchestrator for Kubernetes and the NooBaa data management platform.
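
In practice, both Ceph’s RADOS Gateway and NooBaa’s multicloud object gateway expose an S3-compatible API, so an application reads and writes objects the same way regardless of where the bucket physically resides. The sketch below illustrates that idea with the boto3 client; the endpoint URL, credentials and bucket name are hypothetical placeholders, not values taken from an actual ODF deployment.

```python
# Minimal sketch: talking to an S3-compatible endpoint such as the ones
# exposed by Ceph RGW or NooBaa's multicloud object gateway.
# Endpoint, credentials and bucket name are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3-odf.example.com",  # placeholder ODF/NooBaa endpoint
    aws_access_key_id="ACCESS_KEY",             # placeholder credentials
    aws_secret_access_key="SECRET_KEY",
)

# The application code is identical whether the bucket is backed by
# on-premises Ceph storage or a public cloud object store behind NooBaa.
s3.put_object(Bucket="demo-bucket", Key="reports/latest.json", Body=b'{"status": "ok"}')
obj = s3.get_object(Bucket="demo-bucket", Key="reports/latest.json")
print(obj["Body"].read())
```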

Brent Compton, senior director for Red Hat Storage, says the goal is to build a unified storage platform that bridges cloud computing and on-premises IT environments to better enable bi-directional application and data mobility.

IBM has multiple storage platforms, but this transition clearly signals that Ceph is going to become the basis of its storage strategy going forward. IBM says it will take over responsibility for future development of the Ceph, Rook and NooBaa projects in addition to all sales and marketing activities.

It’s not clear how widely adopted Red Hat ODF is, but Compton says the rate at which stateful applications are being built on Kubernetes clusters running in both cloud and on-premises IT environments has greatly accelerated. In fact, many of those stateful applications are being deployed on edge computing platforms to process and analyze data at the point where it is created and consumed.
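
For stateful workloads like these, storage is typically requested through a Kubernetes PersistentVolumeClaim bound to a StorageClass that Rook and Ceph fulfill behind the scenes. The sketch below shows such a request using the official Kubernetes Python client; the StorageClass name is an assumed example and will vary by installation.

```python
# Minimal sketch: requesting block storage for a stateful application via a
# PersistentVolumeClaim, using the official Kubernetes Python client.
# The StorageClass name below is a placeholder; the actual name depends on
# how ODF/Rook-Ceph is installed.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="ceph-rbd",  # hypothetical Ceph-backed StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

core_v1 = client.CoreV1Api()
core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
print("PersistentVolumeClaim 'app-data' created; the provisioner will supply the volume.")
```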

There is no shortage of options when it comes to Kubernetes, but the biggest impediment to achieving hybrid cloud computing is that data is not easily moved from one platform to another. Most applications today are deployed within the confines of a single cloud or on-premises IT environment, which naturally results in a lot of data silos managed in isolation from one another. The total cost of IT increases each time an organization opts to deploy an application on a different platform, and those data silos prevent most organizations from truly achieving hybrid cloud computing success.

It’s not clear how long it might take IBM to realize its vision for eliminating the data silos that impede true hybrid cloud computing, but as more applications access data strewn across multiple cloud platforms, the issue is becoming a bigger concern. Applications are not only invoking microservices to access data across a highly distributed computing environment, but the types of data being accessed are also becoming more diverse.

In the meantime, organizations are hiring data engineers to work alongside DevOps teams to implement best practices for automating the management of diverse storage systems. The expectation is that a set of DataOps best practices will emerge that, in many ways, mimics the processes created to automate application development and deployment. The goal should be to modernize data management in a way that simplifies what is currently a highly fractured storage environment.

Naturally, none of those goals are going to be achieved overnight, but it’s apparent that, as object storage systems mature, there’s a lot more opportunity for progress.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was editorial director for Ziff-Davis Enterprise as well as editor-in-chief for CRN and InfoWorld.
