Kubeflow 1.0 Advances AI Adoption on Kubernetes Platforms

Kubeflow, an open source set of tools designed to make it easier to build and deploy machine learning models on Kubernetes clusters, has formally reached its 1.0 milestone.

Animesh Singh, chief architect and program director for artificial intelligence (AI) and machine learning platforms at IBM, says Kubeflow will play a critical role in making it easier for IT teams to create AI models, which already make extensive use of containers. AI models, in general, consume far too much data to be constructed in a monolithic fashion, he notes.

The 1.0 release includes a user interface; a Jupyter notebook controller and web application; the TensorFlow Operator (TFJob) and PyTorch Operator for distributed training; kfctl for deployment and upgrades; and a profile controller and UI for multiuser management. There is also an Operator framework designed to make it easier to deploy, manage and update Kubeflow on Kubernetes clusters.
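
To give a sense of what the TFJob operator handles, the sketch below submits a minimal distributed TensorFlow training job as a Kubernetes custom resource through the standard Kubernetes Python client. The image name, namespace and replica count are placeholders, and the bare-bones spec is an assumption for illustration rather than anything specified in this article.

```python
from kubernetes import client, config

# Assumes a kubeconfig pointing at a cluster where Kubeflow 1.0's
# TFJob operator is installed.
config.load_kube_config()

# Minimal TFJob custom resource: two worker replicas running a
# hypothetical training image.
tfjob = {
    "apiVersion": "kubeflow.org/v1",
    "kind": "TFJob",
    "metadata": {"name": "mnist-train", "namespace": "kubeflow"},
    "spec": {
        "tfReplicaSpecs": {
            "Worker": {
                "replicas": 2,
                "restartPolicy": "OnFailure",
                "template": {
                    "spec": {
                        "containers": [{
                            # TFJob expects the training container to be
                            # named "tensorflow".
                            "name": "tensorflow",
                            # Hypothetical image for illustration only.
                            "image": "gcr.io/my-project/mnist-train:latest",
                        }]
                    }
                },
            }
        }
    },
}

# Submit the custom resource; the TFJob operator then creates and
# supervises the underlying pods.
api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="kubeflow.org", version="v1", namespace="kubeflow",
    plural="tfjobs", body=tfjob,
)
```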

The goal is to make it possible for data scientists to develop models in Jupyter notebooks and then employ Kubeflow tools such as fairing, a Python-based software development kit, to build containers and create the Kubernetes resources used to train AI models. Once a model is trained, the KFServing tool can be used to create and deploy an inference server on which the model runs in a production environment.
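
As a rough illustration of that workflow, the sketch below uses the fairing SDK to package a local training function into a container image and run it on the cluster as a Kubernetes job. The builder and deployer names, registry and base image are assumptions drawn from fairing's documentation of the 1.0 era, not from anything in this article, so treat the exact calls as a sketch.

```python
from kubeflow import fairing

# Assumed environment details: a registry you can push images to and a
# TensorFlow base image. Swap in values for your own setup.
DOCKER_REGISTRY = "gcr.io/my-project/fairing-job"  # hypothetical registry
BASE_IMAGE = "tensorflow/tensorflow:1.15.2-py3"

# Build step: append the local code onto the base image and push it.
fairing.config.set_builder("append", base_image=BASE_IMAGE,
                           registry=DOCKER_REGISTRY, push=True)

# Deploy step: run the packaged function as a Kubernetes Job.
fairing.config.set_deployer("job")

def train():
    # Placeholder training routine; in practice this would load data,
    # fit a model and write the artifact to object storage for serving.
    print("training model...")

if __name__ == "__main__":
    # Wrapping the function triggers the remote build-and-run workflow
    # instead of executing it locally.
    remote_train = fairing.config.fn(train)
    remote_train()
```

The trained model can then be exposed for inference by creating a KFServing InferenceService custom resource (in the serving.kubeflow.org API group at the 1.0 timeframe), submitted in the same way as the TFJob shown earlier.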

The biggest challenge many organizations currently face is embedding AI models within applications and then continuously updating them as new data sources become available or as the assumptions that went into building a model become less relevant. Building and updating AI models is now generally referred to as machine learning operations (MLOps), which in turn will need to be incorporated into a broader set of DevOps best practices.

Singh says it’s apparent Kubernetes will become the de facto standard on which most applications infused with AI models will be deployed. By working with Google and other contributors to Kubeflow, IBM is looking to accelerate the rate at which AI models are employed in production applications. Thus far, that rate of adoption has not been as extensive as many proponents of AI had initially hoped. A recent IBM survey of 4,514 businesses in the U.S., European Union and China finds 75% of respondents either have deployed some form of AI (34%) or are exploring it (39%). However, another recent survey of 1,062 business and IT executives conducted by PwC finds only 4% of respondents say they plan to deploy AI enterprise-wide in 2020, a significant drop from the 20% who said they planned to do so in 2019.

Kubernetes and Kubeflow won’t resolve all the challenges associated with incorporating AI into business processes. However, they should go a long way toward reducing the cost of experimentation by making it easier to build and deploy AI models. The real challenge now will be laying down the Kubernetes foundation on which most AI innovation is about to be built.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
