Redis Labs Simplifies Data Platform Deployment for Microservices

For some time now, Redis Labs has been making the case for a data services framework that can be deployed anywhere. Whether in support of microservices packaged in Docker containers or running in a platform-as-a-service (PaaS) environment, the company contends IT organizations should employ a NoSQL database management system that runs in memory.

Redis Labs this week bolstered that strategy by making it easier to deploy the Redis Enterprise database on top of the Cloud Foundry PaaS curated by Pivotal, using the BOSH tooling the Cloud Foundry Foundation (CFF) has embraced to simplify deployment of its PaaS. Cihan Biyikoglu, vice president of product management for Redis Labs, says the goal is to enable DevOps teams to deploy, with a single click, a database that is designed from the ground up to scale up and down to meet the needs of microservices making large numbers of calls to it.

As microservices continue to evolve, IT organizations will contend with multiple flavors of them. In some instances, a microservice will be a short-lived application that can be hosted easily almost anywhere. In other instances, a microservice might be a long-running application that requires access to resources that enable it to scale dynamically. In those cases, Biyikoglu says, most organizations will prefer to rely on a PaaS designed to support persistent applications at scale.

Given the need for multiple deployment options, Biyikoglu says there’s a clear need for an in-memory database that provides a consistent mechanism for accessing data across multiple application deployment scenarios. In fact, as computing becomes more distributed thanks to the rise of Internet of Things (IoT) applications, for example, the need for a distributed database running in memory will only become more pronounced. With that issue in mind, Redis Labs recently expanded the zero-touch deployment capabilities of Redis Cloud Private, an instance of its database that supports multi-cloud replication across multiple regions.
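To illustrate what a consistent access mechanism can look like in practice, the sketch below shows a microservice that pulls its Redis connection string from the environment, so the same application code runs unchanged whether the database sits in a container, on a PaaS or in a managed cloud instance. The REDIS_URL variable name and the use of the redis-py client are assumptions for illustration; Cloud Foundry service bindings expose equivalent credentials through VCAP_SERVICES.

```python
# Minimal sketch: read the Redis connection string from the environment so the
# same code works across deployment targets. REDIS_URL is an assumed variable
# name; a real Cloud Foundry binding would surface credentials via VCAP_SERVICES.
import os
import redis

# Fall back to a local instance when no binding is present (e.g., on a laptop).
redis_url = os.environ.get("REDIS_URL", "redis://localhost:6379/0")
client = redis.Redis.from_url(redis_url, decode_responses=True)

# The application logic is identical regardless of where Redis is deployed.
client.set("session:42", "active", ex=300)  # expire after five minutes
print(client.get("session:42"))
```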

It’s unclear to what degree the rise of microservices applications will force companies to rethink their database strategies. It is clear, however, that because most microservices are latency-sensitive, there is a marked preference for keeping as much data as possible in memory. The issue with most traditional databases is that the amount of data that can be held in cache is limited. At the same time, most organizations have already made significant investments in legacy database systems they are reluctant to replace. As a result, many organizations may find themselves managing multiple types of distributed databases in support of multiple classes of applications.
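One common way to bridge that gap without replacing the legacy system outright is the cache-aside pattern: latency-sensitive reads are served from the in-memory store when possible and fall back to the existing database only on a miss. The sketch below assumes the redis-py client; query_legacy_db(), the key layout and the TTL are hypothetical placeholders rather than anything specific to Redis Enterprise.

```python
# Sketch of the cache-aside pattern: serve reads from the in-memory store when
# possible, hitting the legacy database only on a cache miss and repopulating
# the cache with a short TTL. Names and values here are illustrative.
import json
import redis

cache = redis.Redis.from_url("redis://localhost:6379/0", decode_responses=True)

def query_legacy_db(customer_id: str) -> dict:
    # Hypothetical stand-in for a call into the existing system of record.
    return {"id": customer_id, "tier": "gold"}

def get_customer(customer_id: str, ttl_seconds: int = 60) -> dict:
    key = f"customer:{customer_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)          # cache hit: answered from memory
    record = query_legacy_db(customer_id)  # cache miss: query the legacy database
    cache.set(key, json.dumps(record), ex=ttl_seconds)
    return record
```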

In the meantime, competition among database vendors looking to provide the data layer needed to support microservices applications is already fierce. Less clear, however, is to what degree developers are driving that decision-making process versus more traditional database administrators (DBAs). Regardless of who ultimately makes that decision, it is now only a matter of time before someone in the IT organization needs to make it.

Mike Vizard

Mike Vizard is a seasoned IT journalist with more than 25 years of experience. He has also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.