IBM Uses Kubernetes to Run Watson Apps on Any Cloud

IBM this week announced it will leverage Kubernetes to make artificial intelligence (AI) applications based on the IBM Watson platform available on any cloud.

Announced at the IBM Think 2019 conference, this latest IBM AI initiative employs IBM Cloud Private for Data (ICP for Data), a middleware stack IBM developed on top of Kubernetes, to deliver a series of AI technologies as a set of microservices that can run anywhere.
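To make "run anywhere" concrete, the sketch below shows in broad strokes how a containerized AI service can be pushed to any conformant Kubernetes cluster using the official Kubernetes Python client. The image name, labels and replica count are illustrative placeholders, not details of ICP for Data itself.

```python
# Minimal sketch: deploying a containerized AI microservice to any
# conformant Kubernetes cluster. All names and the image are hypothetical.
from kubernetes import client, config

def deploy_ai_microservice():
    # Load credentials from ~/.kube/config; the same code works against
    # an on-premises cluster or a managed cloud cluster.
    config.load_kube_config()

    container = client.V1Container(
        name="ai-service",                                # hypothetical name
        image="registry.example.com/ai-service:1.0",      # hypothetical image
        ports=[client.V1ContainerPort(container_port=8080)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "ai-service"}),
        spec=client.V1PodSpec(containers=[container]),
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="ai-service"),
        spec=client.V1DeploymentSpec(
            replicas=3,
            selector=client.V1LabelSelector(match_labels={"app": "ai-service"}),
            template=template,
        ),
    )
    # The same deployment definition applies unchanged on any cloud.
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment
    )

if __name__ == "__main__":
    deploy_ai_microservice()
```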

The AI applications IBM will make available include Watson Assistant, an AI tool for building conversational interfaces into applications and devices using a set of bots developed by IBM, and the Watson Assistant Discovery Extension, which enables organizations to employ algorithms to analyze unstructured data and documents.
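For developers, consuming Watson Assistant typically means calling its REST API or one of IBM's SDKs. The fragment below is a minimal sketch using IBM's ibm-watson Python SDK; the API key, assistant ID, service URL and version date are placeholders you would replace with your own values.

```python
# Minimal sketch of querying Watson Assistant via IBM's ibm-watson Python SDK.
# Credentials, assistant ID, service URL and version date are placeholders.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

assistant = AssistantV2(
    version="2019-02-28",  # example API version date
    authenticator=IAMAuthenticator("YOUR_API_KEY"),
)
assistant.set_service_url("https://gateway.watsonplatform.net/assistant/api")

# Sessions hold conversation state; create one, then send a user utterance.
session_id = assistant.create_session(
    assistant_id="YOUR_ASSISTANT_ID"
).get_result()["session_id"]

reply = assistant.message(
    assistant_id="YOUR_ASSISTANT_ID",
    session_id=session_id,
    input={"message_type": "text", "text": "What can you do?"},
).get_result()

# Print the assistant's first text response.
print(reply["output"]["generic"][0]["text"])
```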

IBM already makes Watson Studio and Watson Machine Learning available on ICP for Data, and it has promised to bring additional Watson services to the platform later this year, including Watson Knowledge Studio and Watson Natural Language Understanding.

All those Watson microservices are orchestrated by the Watson OpenScale platform, which IBM developed to manage multiple instances of AI software.

Bala Rajaraman, CTO for IBM Cloud Platform Services, says Kubernetes makes it possible for IBM to leverage a common runtime and management framework to deliver services across multiple clouds. In effect, IBM is now moving compute to where the data is, rather than requiring organizations to move data to where a computing platform happens to be located. In IBM’s case, ICP for Data provides a foundation for delivering a variety of applications that now include the company’s Watson portfolio, he says.

As part of that effort, IBM also revealed this week that it plans to make IBM Business Automation Intelligence with Watson available later this year; the offering is intended to make it easier for developers to embed AI services within their applications.

IBM is also increasingly taking advantage of Kubernetes to move middleware for managing IT environments into the cloud: application workloads will be highly distributed, but the frameworks for managing them will increasingly be consolidated.

In some ways, IBM is simply following where AI developers have already headed: most AI applications are built using containers, which make it practical to update applications that would otherwise be too large to develop and maintain.
Containers make it possible to apply DevOps best practices to the development of AI applications, most of which are being developed on graphics processing units (GPUs) running in the cloud. But the AI models themselves frequently wind up deployed both in the cloud and in on-premises IT environments.
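That split between where models are trained and where they run is why the container pattern works: a model trained on cloud GPUs can be packaged into an image and served anywhere. The following is a minimal, hypothetical sketch of such a serving microservice using Flask and a pickled model; the file name and endpoint are illustrative assumptions, not IBM's implementation.

```python
# Hypothetical sketch: a trained model wrapped in a small HTTP microservice,
# so one container image can run in the cloud or on-premises unchanged.
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical path: the model file is baked into the container image at
# build time, keeping training (on cloud GPUs) decoupled from serving.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body such as {"features": [5.1, 3.5, 1.4, 0.2]}.
    features = request.get_json()["features"]
    return jsonify({"prediction": model.predict([features]).tolist()})

if __name__ == "__main__":
    # Bind to all interfaces so Kubernetes can route traffic to the pod.
    app.run(host="0.0.0.0", port=8080)
```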

It may be a while before AI becomes pervasive. But it's clear AI capabilities will spread either by being embedded in applications or by being layered on as services invoked via application programming interfaces (APIs). A recent MIT Sloan report finds 83 percent of survey respondents view driving AI across the enterprise as a strategic opportunity. The challenge many of those organizations will face is mastering the tools and processes required to develop AI applications made up of microservices constructed from what will eventually be thousands of containers.
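From the application side, consuming AI via an API then reduces to a single HTTP call. Here is a minimal sketch, assuming an inference endpoint like the one outlined above; the URL and payload are hypothetical.

```python
# Hypothetical sketch: an application embedding AI by calling an
# inference service over HTTP. The endpoint and payload are placeholders.
import requests

resp = requests.post(
    "https://ai-service.example.com/predict",   # hypothetical endpoint
    json={"features": [5.1, 3.5, 1.4, 0.2]},
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["prediction"])
```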

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
