D2iQ has extended its existing portfolio of automation tools with a curated distribution of Kubeflow, open source software that makes it easier to deploy workflows incorporating machine learning algorithms on a Kubernetes cluster.
Jie Yu, chief architect for D2iQ, says KUDO for Kubeflow will make it easier for IT teams to deploy workloads that include frameworks such as Spark and Horovod on Kubernetes clusters. At the core of KUDO for Kubeflow is Kommander, a role-based tool that provides centralized management, governance and visibility into disparate Kubernetes clusters regardless of where they are running.
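The article doesn't show how KUDO-packaged software is actually deployed, but KUDO operators are generally driven through the `kubectl kudo` plugin. A minimal sketch of that workflow follows; the `kubeflow` operator name, the namespace and the flags shown are illustrative assumptions, not commands confirmed by D2iQ:

```shell
# Install the KUDO kubectl plugin (here via krew) and initialize
# KUDO's cluster-side controller -- a prerequisite for any KUDO operator.
kubectl krew install kudo
kubectl kudo init

# Hypothetical: install a Kubeflow operator into its own namespace.
# Operator name and flags are assumptions for illustration only.
kubectl kudo install kubeflow --namespace kubeflow
```

The appeal of this model for teams without deep Kubernetes expertise is that a single declarative install command stands in for the many manifests, CRDs and dependency orderings a Kubeflow deployment otherwise requires.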
IT organizations building and deploying artificial intelligence (AI) applications based on machine learning algorithms have embraced containers to simplify building and managing what would otherwise be a massive monolithic application, too unwieldy to build, update and deploy.
Kubernetes, meanwhile, has become the de facto standard for orchestrating containers. D2iQ provides a series of tools that automate the deployment of Kubernetes clusters, which are now being extended to include support for Kubeflow.
It’s still early days as far as deploying AI applications and machine learning on Kubernetes is concerned. However, Yu notes that IT teams building these applications typically work across a fleet of Kubernetes clusters, and those teams generally don’t have a lot of Kubernetes expertise. KUDO for Kubeflow provides a layer of abstraction that masks the underlying complexity of Kubernetes from those teams, he says.
In general, many teams building AI applications are struggling with how to inject AI models into applications once they are built. IT teams looking to address that issue have adopted machine learning operations (MLOps) best practices that ideally should align with the DevOps practices adopted by application development teams.
Of course, like any other software module, most AI models will need to be updated or replaced as new data becomes available. AI models are typically trained to optimize for a very specific set of conditions. As business conditions evolve, however, it may become apparent that the deployed AI model no longer delivers optimal results. The more frequently AI models are updated, the more critical it becomes to automate the entire deployment process.
Yu says most organizations that adopt Kubernetes will underestimate the challenges that stem from managing it at scale. However, given the critical nature of the AI applications organizations are trying to deploy, it’s only a matter of time before those organizations look for ways to automate the management of Kubernetes.
A recent Forrester Research survey finds 76% of data scientists and IT practitioners expect their use of machine learning algorithms to increase in the next 18 to 24 months. In fact, it’s hard to imagine any cloud-native application going forward that will not incorporate machine learning algorithms to one degree or another.