June 22, 2018

Thanks to LINBIT, containerized applications can now access the kind of block storage typically provided by high-performance storage systems supporting enterprise applications built on relational databases.

Available now in beta, LINSTOR is container-native block storage software compatible with both Kubernetes clusters and Red Hat's OpenShift platform-as-a-service (PaaS) environment, via support for the Container Storage Interface (CSI) being developed under the Cloud Native Computing Foundation (CNCF).
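In a Kubernetes cluster, CSI support means LINSTOR-backed volumes can be requested through a standard StorageClass. A minimal sketch of what that might look like; the provisioner name and the parameter keys here are assumptions for illustration and may differ in the beta release:

```yaml
# Illustrative StorageClass requesting LINSTOR-backed volumes via CSI.
# Provisioner name and parameters are assumptions, not confirmed by LINBIT.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-replicated
provisioner: linstor.csi.linbit.com   # assumed CSI driver name
parameters:
  autoPlace: "2"           # hypothetical: number of replicas to place
  storagePool: "pool-ssd"  # hypothetical: LINSTOR pool to draw from
reclaimPolicy: Delete
```

A PersistentVolumeClaim referencing this class would then be provisioned dynamically, the same way claims against any other CSI driver are.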

LINBIT COO Brian Hellman says LINSTOR is the latest addition to a portfolio of open source software-defined storage (SDS) offerings that make it possible for IT organizations to access block-based storage on whatever underlying storage hardware they choose.

Block storage is typically associated with high-performance database applications that require frequent access to blocks of storage to read and write data. Hellman says it is now becoming more common to see containerized applications in these environments, and some IT organizations are also beginning to deploy the databases those applications rely on in containers.

LINSTOR takes advantage of DRBD, a part of the Linux kernel that LINBIT pioneered, to replicate data. System administrators define the number of nodes to be used from a pool of storage, the number and size of storage volumes, and the number of replicas needed. LINSTOR then identifies the servers with available space to construct the storage environment in a way that allows commodity solid-state drives and hard disk drives to be swapped in and out as needed. DRBD is already widely used to enable high availability (HA) and geoclustering for disaster recovery (DR) in OpenStack- and OpenNebula-based clouds that take advantage of SDS to manage commodity storage hardware, notes Hellman.
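That workflow maps onto LINSTOR's command-line client. A rough sketch, with node addresses, resource names, and sizes invented for illustration; exact commands and flags may differ in the beta:

```shell
# Register storage nodes with the LINSTOR controller (names/IPs illustrative).
linstor node create node-a 192.168.0.11
linstor node create node-b 192.168.0.12
linstor node create node-c 192.168.0.13

# Define a resource with a single 100 GiB volume.
linstor resource-definition create db-vol
linstor volume-definition create db-vol 100GiB

# Let LINSTOR pick servers with free space and create two DRBD replicas.
linstor resource create db-vol --auto-place 2
```

The `--auto-place` step is where LINSTOR, rather than the administrator, selects which servers in the pool will hold the replicas, which is what allows underlying drives to be swapped in and out without reworking the layout by hand.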

It remains to be seen just how many databases might be deployed in containers. It's not uncommon for some IT organizations to deploy a database on top of virtual machines, but for performance reasons, many still prefer to deploy databases on bare-metal servers. Containers, as a lighter-weight form of virtualization, may be a more attractive option for IT organizations that want to be able to easily move databases from one platform to another.

In general, the bulk of container applications deployed today are stateless. But as various classes of stateful applications are deployed on containers, many storage administrators will be challenged to manage all the I/O requests being generated. In fact, because containerized applications tend to scale up in a less-than-predictable fashion, SDS systems may soon become a requirement as storage administrators move to provision additional I/O resources on demand.

In the meantime, as developers continue to embrace containers, the number of applications expecting to access some form of persistent storage continues to expand rapidly. While many of those containerized applications are initially being deployed on virtual machines, it's already clear that performance concerns will push more of them onto bare-metal servers running Kubernetes. The challenge then will be finding a way to unify the management of data and storage in those containerized environments.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.