IBM Promises Software-Defined Storage for Containers

IBM this week announced it will deliver, in the second half of this year, software-defined storage (SDS) embedded in containers that can be deployed anywhere.

Eric Herzog, vice president for business development and go-to-market for the IBM Storage Division, says IBM will also use that opportunity to include data protection capabilities with the General Parallel File System (GPFS) upon which IBM Spectrum Fusion storage software is based.

In addition, IBM is updating its IBM Elastic Storage System (ESS) family with two new models that, respectively, increase capacity by 10% and double read performance compared to their predecessors.

The first iteration of IBM Spectrum Fusion will arrive as a hyperconverged infrastructure (HCI) system running the Kubernetes-based Red Hat OpenShift platform. In early 2022, IBM plans to release a software-only version of IBM Spectrum Fusion.
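IBM hasn't yet detailed how IBM Spectrum Fusion will surface its storage to OpenShift workloads, but container-native SDS platforms generally expose capacity through standard Kubernetes storage classes that applications consume via persistent volume claims. The sketch below, using the Kubernetes Python client, shows that general pattern; the storage class name, volume name and size are placeholder assumptions, not published IBM identifiers.

```python
# Illustrative only: request a persistent volume from a container-native SDS
# layer on an OpenShift/Kubernetes cluster. The storage class "spectrum-fusion"
# is a hypothetical placeholder, not a published IBM identifier.
from kubernetes import client, config


def request_volume(namespace: str = "default") -> None:
    # Load credentials from the local kubeconfig (e.g., an `oc login` session).
    config.load_kube_config()

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="analytics-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteMany"],        # shared access, as a parallel file system allows
            storage_class_name="spectrum-fusion",  # assumed class exposing the SDS layer
            resources=client.V1ResourceRequirements(
                requests={"storage": "100Gi"}
            ),
        ),
    )

    # Submit the claim; the SDS layer backing the storage class provisions the volume.
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace=namespace, body=pvc
    )


if __name__ == "__main__":
    request_volume()
```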

Herzog says that as hybrid cloud computing continues to evolve, it will become more critical to move data to the compute layer rather than trying to process it in a cloud. Edge computing platforms will need to process and analyze data in real time, and a federated approach to managing that highly distributed data will be required, Herzog notes.

In fact, International Data Corp. (IDC) notes the share of new operational processes deployed on edge infrastructure will grow from less than 20% today to over 90% by 2024. IDC also estimates that, by 2022, 80% of organizations will increase spending on edge infrastructure by a factor of four in support of real-time applications infused with AI capabilities.

IBM last month unveiled a 1U all-flash storage system for on-premises IT environments that can scale to hold 1.7 petabytes of data, enough to meet the requirements of IT organizations training AI models on-premises.

In general, the ability to move data between multiple clouds and on-premises IT environments has become a critical requirement as the centers of data gravity in the enterprise continue to shift. Organizations need the flexibility to move and replicate data that is accessed by a growing number of applications running on different platforms. It’s not always feasible or practical to access that data remotely when many of those applications are increasingly latency-sensitive, thanks in part to increased reliance on microservices.

It’s not clear to what degree data and storage management might ultimately converge in the age of the hybrid cloud. However, making sure the right data is in the right place at the right time will become more critical as microservices are distributed across a hybrid cloud computing environment. In theory, machine learning algorithms, along with other advances in AI, should make it easier to achieve that goal without always having to rely on a data engineering specialist to manually configure data pipelines.

In the meantime, however, IT teams would be well advised to reevaluate their existing approaches to data and storage management, as a new generation of microservices-based applications comes to the fore.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
