AWS Leverages Containers to Automate ML Model Deployments

Amazon Web Services (AWS) is leveraging containers to make it simpler to deploy machine learning models built with the Amazon SageMaker Studio Notebook tool into production environments.

Rather than requiring data scientists to set up, configure and manage a continuous integration/continuous delivery (CI/CD) pipeline to automate their deployments, AWS is making a case for using containers that enable them to select a notebook and automatically create a job that can run in a production environment.

Amazon SageMaker Studio Notebook accomplishes that by taking a snapshot of the entire notebook and then packaging its dependencies in a container. The resulting job can be scheduled to run and, upon completion, the infrastructure employed to run it is automatically deprovisioned.
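For illustration, the sketch below shows how a notebook can be run as a scheduled job using the SageMaker Python SDK’s notebook job pipeline step. The step name, notebook path, image URI and role are hypothetical placeholders, and exact parameter names may vary by SDK version; treat this as a sketch of the pattern rather than a definitive recipe.

```python
# A minimal sketch of running a notebook as a SageMaker job via the
# SageMaker Python SDK's NotebookJobStep. All names below are hypothetical
# placeholders; consult the SDK docs for your version.
from sagemaker.workflow.notebook_job_step import NotebookJobStep
from sagemaker.workflow.pipeline import Pipeline

# The notebook is snapshotted and packaged with its dependencies into the
# container image referenced below; compute is provisioned on demand and
# torn down when the run completes.
nb_step = NotebookJobStep(
    name="nightly-model-refresh",                    # hypothetical step name
    input_notebook="notebooks/train_model.ipynb",    # hypothetical path
    image_uri="<container-image-uri>",               # image holding the kernel
    kernel_name="python3",
    instance_type="ml.m5.xlarge",
    role="<execution-role-arn>",
)

pipeline = Pipeline(name="notebook-job-demo", steps=[nb_step])
pipeline.upsert(role_arn="<execution-role-arn>")  # register the pipeline
pipeline.start()                                  # run once; a recurring
                                                  # schedule can be attached
                                                  # separately (e.g., EventBridge)
```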

Ankur Mehrotra, general manager for Amazon SageMaker, says the goal is to reduce the time required to move a notebook into production from the weeks currently needed to a few hours.

The Amazon SageMaker platform is a managed service that AWS provides to simplify the development of machine learning models that infuse artificial intelligence (AI) capabilities into applications. The platform spans everything from data preparation and governance to deployment. The models created with Amazon SageMaker are invoked via a standard set of application programming interfaces (APIs) that the managed service automatically creates, notes Mehrotra.
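In practice, applications call a deployed model through the SageMaker runtime invocation API. The sketch below shows that pattern with the AWS boto3 SDK; the endpoint name and payload format are hypothetical and depend on how the model was deployed.

```python
# A minimal sketch of calling a deployed SageMaker model through the
# standard invocation API. The endpoint name and payload shape are
# hypothetical placeholders.
import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="my-model-endpoint",            # hypothetical endpoint name
    ContentType="application/json",
    Body=b'{"instances": [[0.5, 1.2, 3.4]]}',    # model-specific payload
)

# The response body carries the model's prediction.
prediction = response["Body"].read().decode("utf-8")
print(prediction)
```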

That approach also makes it simple for data science teams to update or replace models without disrupting application development workflows, he says.
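One way that swap works without disturbing callers is a blue/green-style endpoint update, sketched below with boto3. The endpoint, model and configuration names are hypothetical, and the retrained model is assumed to already be registered with SageMaker.

```python
# A minimal sketch of replacing the model behind an existing endpoint
# without changing the API that applications call. All names are
# hypothetical; the new model must already exist (via create_model).
import boto3

sm = boto3.client("sagemaker")

# Point a fresh endpoint configuration at the retrained model.
sm.create_endpoint_config(
    EndpointConfigName="my-model-config-v2",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "my-model-v2",      # hypothetical retrained model
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

# The endpoint name (and thus the client-facing API) stays the same while
# SageMaker shifts traffic to the new configuration.
sm.update_endpoint(
    EndpointName="my-model-endpoint",
    EndpointConfigName="my-model-config-v2",
)
```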

The degree to which organizations will opt to rely on a managed service to build AI models will naturally vary. However, given the chronic shortage of AI expertise, it makes more sense for organizations to use a platform that automates many of the manual tasks required to build and deploy those models, says Mehrotra.

Most of those models are going to be stored in a repository provided by AWS, but there is a way to integrate Amazon SageMaker with a Git repository if an organization decides to standardize on a single repository for both its ML models and software artifacts, notes Mehrotra.
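For teams that go the Git route, SageMaker can register an external repository directly. The boto3 call below sketches that integration; the repository URL and Secrets Manager ARN are hypothetical placeholders.

```python
# A minimal sketch of registering a Git repository with SageMaker so model
# code can live alongside other software artifacts. The URL and secret ARN
# below are hypothetical placeholders.
import boto3

sm = boto3.client("sagemaker")

sm.create_code_repository(
    CodeRepositoryName="ml-models-repo",
    GitConfig={
        "RepositoryUrl": "https://github.com/example-org/ml-models.git",
        "Branch": "main",
        # Credentials for private repositories are read from Secrets Manager.
        "SecretArn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:git-creds",
    },
)
```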

There’s little doubt that most applications will soon be infused with some type of ML model. The challenge is bridging the divide that currently exists between most DevOps and data science teams. ML models are subject to drift over time as new data is collected, so organizations are developing machine learning operations (MLOps) best practices to manage updates or replace models entirely when necessary. Naturally, those updates need to be aligned to any application updates in which an ML model is embedded.
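As a simple illustration of what such a drift check can look like, the sketch below compares a feature’s training-time distribution against live production data with a two-sample Kolmogorov-Smirnov test. This is a generic example of the technique, not a description of any specific SageMaker monitoring feature, and the threshold shown is an arbitrary assumption.

```python
# A minimal, framework-agnostic sketch of a per-feature drift check using a
# two-sample Kolmogorov-Smirnov test. The alpha threshold is an arbitrary
# assumption; production systems tune this per feature.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train_col: np.ndarray, live_col: np.ndarray,
                 alpha: float = 0.01) -> bool:
    """Return True if the live distribution differs significantly
    from what the model saw at training time."""
    _, p_value = ks_2samp(train_col, live_col)
    return p_value < alpha

# Hypothetical usage: flag a feature whose production distribution shifted.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 5_000)   # distribution seen during training
live = rng.normal(0.4, 1.0, 5_000)    # shifted distribution in production
print(detect_drift(train, live))      # True -> trigger a retrain/redeploy
```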

It’s not clear how much MLOps and DevOps workflows might ultimately converge, but it’s apparent that the rate at which ML models are being created is starting to accelerate. Development processes are becoming more automated, and many of the latest generation of models require less data to create. As such, depending on the use case, ML models of varying sizes are now being deployed more frequently in production environments.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.