Kasten Adds Transformation Framework to Manage Kubernetes Data

Kasten this week updated its data management platform for Kubernetes to make it possible to capture more granular events when migrating or backing up data.

Company CEO Niraj Tolia says version 2.5 of the Kasten K10 data management platform adds a Cloud Native Transformation Framework that captures additional data and metadata to automate data transfers and improve overall reliability. New capabilities include support for parallel data transfers, lock-free algorithms, pluggable encryption and compression, advanced deduplication and smaller fault domains.
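
To make ideas such as parallel transfers and deduplication concrete, the sketch below shows a generic content-addressed approach in Python: a file is split into chunks, each chunk is keyed by its hash, and only chunks the target has not already seen are uploaded, in parallel. This is an illustrative sketch of the general technique, not Kasten's implementation; the `object_store` target and the `seen` digest set are hypothetical placeholders.

```python
# Generic sketch of parallel, deduplicated chunk transfer -- illustrative only,
# not Kasten's implementation. `object_store` is a hypothetical target with a
# put(key, data) method; `seen` is the set of chunk digests it already holds.
import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB chunks (illustrative value)

def chunk_file(path):
    """Yield (sha256 digest, bytes) pairs for fixed-size chunks of a file."""
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK_SIZE)
            if not data:
                break
            yield hashlib.sha256(data).hexdigest(), data

def transfer(path, object_store, seen):
    """Upload a file's chunks in parallel, skipping any chunk whose digest is
    already in the target store, and return the ordered manifest of digests."""
    def upload(item):
        digest, data = item
        if digest not in seen:       # deduplication: only new content moves
            object_store.put(digest, data)
            seen.add(digest)         # a rare duplicate upload here is harmless
        return digest

    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(upload, item) for item in chunk_file(path)]
        return [f.result() for f in futures]   # manifest preserves chunk order
```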

Metadata, meanwhile, is being employed to automate workload migrations and data transfers between Kubernetes clusters and to verify that backups have completed successfully.
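
One way such metadata can be used to confirm a backup completed intact is to re-derive each chunk's digest on the target side and compare it against the manifest recorded at transfer time. The sketch below illustrates that check generically, continuing the hypothetical `object_store` interface from the sketch above; it is not a description of K10's verification logic.

```python
# Minimal sketch of metadata-driven backup verification: every chunk listed in
# the manifest must be present and its content must still match the recorded
# digest. `object_store.get` is a hypothetical accessor returning the stored
# bytes, or None if the chunk is missing.
import hashlib

def verify(manifest, object_store):
    """Return True only if every chunk in the manifest is present and unchanged."""
    for digest in manifest:
        data = object_store.get(digest)
        if data is None or hashlib.sha256(data).hexdigest() != digest:
            return False
    return True
```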

Kasten has been making the case for a more application-centric approach to data management in cloud-native environments, one that makes it simpler to apply policies to data. The inherently ephemeral nature of containers requires a more dynamic approach to managing data than has traditionally been employed in monolithic IT environments. In addition to support for Kubernetes application programming interfaces (APIs), Kasten K10 provides automatic application discovery, multi-cloud mobility and integration with a wide variety of databases.
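
To illustrate what application-centric discovery via the Kubernetes API can look like in practice, the sketch below uses the official Kubernetes Python client to group each namespace's Deployments with the PersistentVolumeClaims they sit alongside, so that policies can be applied per application rather than per disk. This is a generic illustration of the idea, not K10's discovery mechanism.

```python
# Generic sketch of application discovery via the Kubernetes API: treat each
# namespace with workloads as an "application" and pair its Deployments with
# its volume claims. Illustrative only -- not Kasten K10's discovery logic.
from kubernetes import client, config

def discover_applications():
    config.load_kube_config()            # or load_incluster_config() inside a pod
    core = client.CoreV1Api()
    apps = client.AppsV1Api()

    applications = {}
    for ns in core.list_namespace().items:
        name = ns.metadata.name
        deployments = apps.list_namespaced_deployment(name).items
        pvcs = core.list_namespaced_persistent_volume_claim(name).items
        if deployments:
            applications[name] = {
                "workloads": [d.metadata.name for d in deployments],
                "volumes": [p.metadata.name for p in pvcs],
            }
    return applications

if __name__ == "__main__":
    for app, parts in discover_applications().items():
        print(app, parts)
```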

Tolia says Kasten K10 is unique in that it is designed to scale up and down as needed, which reduces the overall size of its IT infrastructure footprint. In that sense, Kasten K10 is essentially serverless, he says.

Now that data in Kubernetes environments is reaching critical mass, data management issues spanning multiple clusters are starting to come to the fore. Historically, many IT teams have employed separate tools to manage and protect data. However, in the age of the cloud, it’s become apparent that data protection should be an element of a more comprehensive approach to data management that includes, among other things, being able to move data into and out of a cloud service more easily. That shift is being driven in part by the rise of more stringent compliance mandates that require organizations to show they have retained control over their data at all times.

Less clear is to what degree IT teams will conclude they need separate data management tools for Kubernetes environments. However, given the overall complexity of an IT environment made up of microservices and the sheer volume of data that now needs to be managed, it’s only a matter of time before IT organizations re-evaluate their approach to data management.

In the meantime, as the number of application workloads deployed on Kubernetes clusters both inside and outside the cloud continues to increase, it’s also clear the weight of data gravity within organizations is starting to shift. There is still a massive amount of data in on-premises IT environments, but the amount of data stored in the cloud continues to grow at an exponential rate. What many IT organizations have yet to fully appreciate is just how much data will move back and forth between those environments as IT continues to evolve in the age of hybrid cloud computing.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was editorial director for Ziff-Davis Enterprise as well as editor-in-chief of CRN and InfoWorld.