PipelineAI Leverages Docker to Simplify AI Model Development

PipelineAI is taking advantage of the community edition of Docker to help organizations develop artificial intelligence (AI) applications faster and at lower cost.

Company CEO Chris Fregly says PipelineAI Community Edition is a free, publicly hosted edition of PipelineAI Enterprise Edition with which developers can employ Apache Kafka streaming software to drive data in real time into AI models built using Spark ML, Scikit-Learn, XGBoost, R, TensorFlow, Keras or PyTorch frameworks.
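
To make that flow concrete, here is a minimal sketch of the pattern: consuming events from a Kafka topic and scoring each one against a trained model in real time. It uses the kafka-python client and a scikit-learn model as stand-ins; the topic name, broker address and model file are hypothetical placeholders, not PipelineAI's own API.

import json

import joblib                      # loads a previously trained scikit-learn model
from kafka import KafkaConsumer    # pip install kafka-python

# Hypothetical model trained offline and saved with joblib.dump()
model = joblib.load("churn_model.joblib")

consumer = KafkaConsumer(
    "events",                                # hypothetical topic name
    bootstrap_servers="localhost:9092",      # hypothetical broker address
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

# Score each incoming event as it arrives from the stream
for message in consumer:
    features = [message.value["features"]]   # one row of input features
    prediction = model.predict(features)[0]
    print(f"prediction: {prediction}")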

The PipelineAI platform makes use of graphics processing units (GPUs) and traditional x86 processors to host an instance of Docker Community Edition that makes available various AI frameworks requiring real-time access to data, says Fregly.
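
As a rough illustration of that hosting pattern, the sketch below uses the Docker SDK for Python to launch a GPU-enabled TensorFlow container, requesting any available GPUs from the host. The image name and command are assumptions chosen for illustration; PipelineAI's actual packaging and orchestration are not shown here.

import docker  # pip install docker

client = docker.from_env()

# Launch a GPU-enabled framework image; on an x86-only host, the same
# pattern works with a CPU image and no device_requests argument.
container = client.containers.run(
    "tensorflow/tensorflow:latest-gpu",      # hypothetical framework image
    command="python -c 'import tensorflow as tf; print(tf.__version__)'",
    device_requests=[
        docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
    detach=True,
)
container.wait()                             # block until the command finishes
print(container.logs().decode())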

Over time, most AI applications will need to access multiple AI models to automate a process. PipelineAI aims to reduce the cost of creating those models by making it less expensive for developers to determine which AI framework will work best over the life cycle of their application. The challenge in building AI models is that the algorithms need access to massive amounts of data in real time to identify patterns and determine appropriate responses. PipelineAI Community Edition is intended to make experimenting with those models more affordable by providing access to sample models and sample notebooks, says Fregly.
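
The kind of framework comparison Fregly describes might look something like the following sketch, which trains the same data against two candidate frameworks, scikit-learn and XGBoost, and compares their accuracy. The data set and model choices are placeholders, not PipelineAI code.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier   # pip install xgboost

# Synthetic stand-in for real training data
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit the same data with two candidate frameworks and compare results
for name, model in [
    ("scikit-learn", LogisticRegression(max_iter=1000)),
    ("XGBoost", XGBClassifier(n_estimators=50)),
]:
    model.fit(X_train, y_train)
    score = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {score:.3f}")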

Fregly notes PipelineAI provides a platform for applying a DevOps process to the development of those AI models. Because those models are built on top of Docker Community Edition, they can then be deployed in any public cloud or on-premises IT environment.

Interest in developing AI applications has increased as the cost of computing and storing the data required to drive AI models has fallen considerably over the last several years. Many of the algorithms being used to drive those models are, in fact, several decades old; only recently has employing them across a broad range of applications become economically feasible. The next challenge many organizations will face is developing DevOps processes to manage the life cycle of AI models that will need to be retrained frequently as more relevant data becomes available. Most existing DevOps processes are optimized for developing applications rather than for teaching an AI model to learn and master a process.
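
A bare-bones version of that retraining loop, assuming a scikit-learn model, a held-out evaluation set and a hypothetical promotion threshold, might look like this sketch; a real pipeline would add versioning, monitoring and rollback.

import joblib
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def retrain_if_better(X_new, y_new, X_holdout, y_holdout,
                      current_path="model.joblib", threshold=0.01):
    """Retrain on fresh data and promote only if it beats the current model."""
    current = joblib.load(current_path)
    baseline = accuracy_score(y_holdout, current.predict(X_holdout))

    candidate = LogisticRegression(max_iter=1000).fit(X_new, y_new)
    score = accuracy_score(y_holdout, candidate.predict(X_holdout))

    # Promote the candidate only on a meaningful improvement
    if score > baseline + threshold:
        joblib.dump(candidate, current_path)
    return score, baseline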

In general, most AI applications will be employed to augment a function performed by a human. The day when an AI application can manage a set of integrated processes end to end without any human intervention is still far away. But as organizations become more adept at creating AI models, it’s only a matter of time before multiple AI models are linked together to drive a range of integrated services. The challenge organizations will face is designing and implementing the framework through which those integrated processes will be managed.

Obviously, it’s still early days when it comes to all things AI. But it’s already apparent that increased reliance on AI models is now more a question of where and how rather than if.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was editorial director for Ziff-Davis Enterprise as well as editor-in-chief for CRN and InfoWorld.