Splice Machine, a provider of an open source SQL database optimized for workloads that incorporate machine learning algorithms, has launched a framework for managing its databases that runs on top of Kubernetes.
Company CEO Monte Zweben says Splice Machine Kubernetes Ops Center will make it easier to manage fleets of Splice Machine databases via a centralized platform. Initially, the command center platform can be deployed on a Kubernetes cluster using open source Helm charts.
Splice Machine is a SQL database designed to scale out in a cloud computing environment, making it possible to converge analytics and online transaction processing (OLTP) within a single application. It is available as open source software, as a fully managed cloud service on Amazon Web Services, Microsoft Azure or Google Cloud Platform, or for deployment in an on-premises IT environment. Splice Machine Kubernetes Ops Center provides the control plane through which multiple instances of Splice Machine databases can be managed regardless of the platform they are deployed on.
In addition to an open source edition of Splice Machine Kubernetes Ops Center, there is also a Splice Machine Kubernetes Enterprise Edition, along with tools such as Cloud Manager and instrumentation and monitoring tools based on ELK, Prometheus, Grafana and PagerDuty incident management software. Each tool requires a separate license and associated Helm chart.
The core Splice Machine Kubernetes Community Edition for a single cluster is available for free, while an enterprise edition for a single Kubernetes cluster adds security and backup functionality.
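In practice, a Helm-based deployment of this kind typically follows the standard add-repo-then-install pattern. The sketch below is a hypothetical illustration only: the repository URL, chart name and release name shown here are assumptions for the sake of example and are not confirmed by Splice Machine.

```shell
# Hypothetical sketch of a Helm-based install. The repo URL and chart
# name below are placeholders, not Splice Machine's actual values.
helm repo add splicemachine https://charts.example.com/splicemachine
helm repo update

# Install the Ops Center control plane into its own namespace;
# per-tool charts (e.g. monitoring) would be installed separately.
helm install ops-center splicemachine/kubernetes-ops-center \
  --namespace splice-ops --create-namespace
```

Because each licensed tool ships as its own Helm chart, upgrades and rollbacks can be handled per component with `helm upgrade` and `helm rollback` rather than redeploying the whole stack.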
While there are many options when it comes to building artificial intelligence (AI) applications, Zweben says SQL remains the lingua franca for launching queries in enterprise IT environments. The core Splice Machine database provides a way for IT organizations to build next-generation AI applications without sacrificing decades of investments in SQL, he notes.
Containers and Kubernetes, meanwhile, have emerged as core technologies for building and deploying AI applications in a modular fashion; a monolithic AI application would simply be too unwieldy to build and maintain. That reliance on containers and Kubernetes makes them a natural fit for deploying a scale-out database as a whole new IT discipline around machine learning operations (MLOps) continues to emerge, he says. MLOps processes encompass everything from building and deploying AI applications to managing the underlying databases and infrastructure those applications rely on.
In general, IT organizations are trying to foster cultures of experimentation by creating sandboxes that enable business units to experiment with AI applications, Zweben says, noting those sandboxes need to be spun up and down on a Kubernetes cluster as required.
It’s too early to say to what degree MLOps might transform IT. In theory, at least, every application going forward is going to incorporate machine learning algorithms to varying degrees. The challenge many IT organizations are wrestling with now is how to incorporate AI models built using those algorithms within application environments that, thanks to DevOps best practices, are now being updated far more frequently than most data science teams can handle.