VAST Data has made generally available versions of its all-flash storage arrays that support the Container Storage Interface (CSI) for connecting external storage systems to Kubernetes clusters.
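In practice, a CSI driver makes an external array consumable by Kubernetes through a StorageClass that pods claim volumes against. The manifest below is a minimal sketch of that pattern; the provisioner name `csi.vastdata.com` and the parameter names are illustrative assumptions, not values taken from VAST's documentation:

```yaml
# Hypothetical StorageClass for a CSI driver backing an external all-flash array.
# The provisioner name and metadata names are assumptions for illustration only.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vast-flash              # assumed name
provisioner: csi.vastdata.com   # assumed CSI driver name
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
# A PersistentVolumeClaim any pod can reference; the CSI driver
# dynamically provisions a volume from the array to satisfy it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteMany             # shared access across pods, as NAS semantics allow
  storageClassName: vast-flash
  resources:
    requests:
      storage: 100Gi
```

A pod then mounts the claim by name in its `volumes` section, with no knowledge of the underlying array.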
Jeff Denworth, vice president of products for VAST Data, says organizations that have adopted containerized applications tend to be among the most progressive IT organizations. VAST Data is making the case for eliminating hard drives altogether by relying on solid-state drives (SSDs) based on quadruple-level cell (QLC) flash memory, which is significantly less expensive than the flash memory used in the first generation of SSDs widely deployed in enterprise IT environments. At that price point, he says, it finally becomes economically feasible for IT organizations to replace failure-prone spinning magnetic media with SSDs.
SSDs based on QLC flash memory have historically been considered reliable enough only for consumer applications. VAST Data mitigates that endurance issue by also employing storage-class memory (SCM), in the form of Intel Optane memory based on 3D XPoint technology, to ensure persistence. The SSDs and Intel Optane memory are connected via an NVMe interface to create VAST Data’s Universal Storage architecture. The operating system VAST Data developed then makes it possible to logically group multiple arrays into a single pool of storage resources that can be assigned to any container pod.
The result is NFS over an RDMA interface that can stream data into a single container at nearly 9GB per second, throughput the company says is four times faster than that of a legacy network-attached storage (NAS) system. At that speed, VAST Data can support the data access demands of even next-generation artificial intelligence (AI) applications built using containers, Denworth says.
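On Linux, NFS can be carried over an RDMA transport rather than TCP, and in Kubernetes that choice is typically expressed through a PersistentVolume's `mountOptions`. The manifest below is a minimal sketch under that assumption; the server address and export path are placeholders, not real VAST endpoints:

```yaml
# Hypothetical PersistentVolume mounting an NFS export over RDMA transport.
# Server address and export path are placeholders for illustration.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-rdma-pv
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  mountOptions:
    - proto=rdma        # use the RDMA transport instead of TCP
    - port=20049        # conventional port for NFS over RDMA
  nfs:
    server: 10.0.0.10   # placeholder array endpoint
    path: /exports/data # placeholder export path
```

The equivalent manual mount is `mount -t nfs -o proto=rdma,port=20049 10.0.0.10:/exports/data /mnt`, which requires RDMA-capable NICs on both client and server.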
As more stateful containerized applications are deployed in on-premises IT environments, competition among storage vendors that support CSI is becoming fiercer. Containerized applications represent a new class of workloads that tend to be latency-sensitive, and existing storage systems often cannot support the I/O requirements of hundreds of containers and pods trying to access the same pool of data simultaneously.
It may be a while before hard drives completely disappear from the data center, but the writing would appear to be on the wall. Beyond the physical space hard drives consume, IT organizations are always anxious to reduce the amount of energy consumed within a data center environment. The challenge, of course, is finding the funding required to replace all those hard drives with SSDs.
Naturally, as more containerized applications are deployed in production environments, the pressure to replace hard drives will only increase. Developers still value application performance above all other metrics, so IT operations teams will come under increased pressure to meet those performance expectations.
In the meantime, IT leaders should make sure hard drives don’t wind up being the weakest performance link in the data center chain long before developers put that infrastructure to the test.