Microsoft’s Azure Arc Now Includes Machine Learning Services

Microsoft, at its online Ignite 2021 conference this week, extended the reach of its Azure Arc framework for managing Kubernetes environments to include artificial intelligence (AI) workloads deployed on edge computing platforms.

Azure Machine Learning services, used to build, deploy and manage machine learning models, have joined the portfolio of data services that Microsoft is now making accessible via Azure Arc.
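For teams looking to experiment, the basic developer-facing flow is to attach an Arc-connected Kubernetes cluster to an Azure Machine Learning workspace as a compute target. The sketch below uses the azureml-core Python SDK as documented around the time of the announcement; the subscription, resource group and cluster names are placeholders, and it assumes the cluster has already been onboarded to Azure Arc with the Azure Machine Learning extension installed on it.

```python
from azureml.core import Workspace
from azureml.core.compute import ComputeTarget, KubernetesCompute

# Load the workspace from a local config.json (downloadable from the Azure portal).
ws = Workspace.from_config()

# Azure Resource Manager ID of an Arc-connected cluster; all names are placeholders.
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Kubernetes/connectedClusters/<cluster-name>"
)

# Attach the Arc cluster to the workspace as a compute target named "arc-compute".
attach_config = KubernetesCompute.attach_configuration(resource_id=resource_id)
compute_target = ComputeTarget.attach(ws, "arc-compute", attach_config)
compute_target.wait_for_completion(show_output=True)
```

Once attached, the cluster can be referenced by name as the compute target for training jobs submitted to that workspace.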

Azure Arc is designed to support any distribution of Kubernetes and has emerged as the primary mechanism through which Microsoft will extend its reach beyond platforms running the complete Azure software stack. Arc assigns each Kubernetes cluster an Azure Resource Manager ID and a managed identity, which surfaces the cluster in the Azure portal. Clusters can then be attached to standard Azure subscriptions, participate in a resource group and be assigned tags like any other Azure resource.
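As a concrete illustration, onboarding a cluster looks roughly like the sketch below, which shells out to the Azure CLI from Python. The resource group and cluster names are hypothetical, and it assumes the Azure CLI is installed and logged in, with the current kubeconfig context pointing at the cluster to be connected.

```python
import subprocess

# Hypothetical names for illustration; substitute your own. Assumes `az login`
# has been run and the resource group already exists.
RESOURCE_GROUP = "arc-demo-rg"
CLUSTER_NAME = "on-prem-cluster"

# Install the Azure CLI extension that provides the connectedk8s commands.
subprocess.run(["az", "extension", "add", "--name", "connectedk8s"], check=True)

# Onboard the cluster: this deploys the Arc agents into the cluster and creates
# the Azure Resource Manager resource (and managed identity) that lets the
# cluster join a resource group and carry tags like any other Azure resource.
subprocess.run(
    [
        "az", "connectedk8s", "connect",
        "--name", CLUSTER_NAME,
        "--resource-group", RESOURCE_GROUP,
    ],
    check=True,
)
```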

Connecting a Kubernetes cluster to Azure requires an administrator to deploy agents, which run in a Kubernetes namespace dubbed azure-arc. Those agents connect the cluster to Azure Arc, collect logs and metrics and watch for configuration requests. IT teams can also apply policies to any Kubernetes distribution using the Azure Policy for Kubernetes service.
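A quick way to confirm those agents are healthy after onboarding is to inspect that namespace directly, and policies can then be assigned against the cluster's Azure resource. In the sketch below, the assignment name and policy definition ID are placeholders, and kubectl access to the cluster is assumed.

```python
import subprocess

# The Arc agents run as deployments in the "azure-arc" namespace.
# Listing them is a simple post-onboarding health check.
subprocess.run(["kubectl", "get", "deployments", "-n", "azure-arc"], check=True)
subprocess.run(["kubectl", "get", "pods", "-n", "azure-arc"], check=True)

# Azure Resource Manager ID of the Arc-connected cluster (placeholder names).
CLUSTER_SCOPE = (
    "/subscriptions/<subscription-id>/resourceGroups/arc-demo-rg"
    "/providers/Microsoft.Kubernetes/connectedClusters/on-prem-cluster"
)

# Assign a policy to the cluster; the definition name/ID below is a
# placeholder for one of the built-in Kubernetes policy definitions.
subprocess.run(
    [
        "az", "policy", "assignment", "create",
        "--name", "enforce-k8s-baseline",             # hypothetical assignment name
        "--policy", "<policy-definition-name-or-id>",  # placeholder
        "--scope", CLUSTER_SCOPE,
    ],
    check=True,
)
```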

Scott Guthrie, executive vice president of Microsoft's Cloud and AI group, told conference attendees that Azure Arc is the management framework at the core of Microsoft's approach to hybrid cloud computing.

In general, organizations are shifting application code to the network edge to process and analyze data as close as possible to the point where it is created and consumed. As organizations look to automate business processes, it will become increasingly important to deploy AI models on those edge computing platforms.

Guthrie said that within five years, it's likely that nearly every physical device will have an IP address and be connected, in some way, to a cloud service. As such, Microsoft is now making it clear that its overall AI strategy will include on-premises platforms running at the network edge.

As hybrid cloud computing continues to evolve, it's clear application workloads will run on various classes of edge computing platforms, in local data centers, on network services such as content delivery networks (CDNs) and across multiple public clouds. Platform vendors from across the IT spectrum are now racing to provide the management framework through which all those workloads might be centrally managed. In effect, the same operating model applied to workloads running on public clouds will soon be applied to workloads running anywhere. There may even come a day when more application workloads run at the network edge than in the cloud.

In the meantime, IT organizations may soon find themselves rationalizing multiple existing management frameworks as it becomes simpler to extend cloud-based frameworks that are, essentially, platforms managed by a provider such as Microsoft. It will ultimately be up to each organization to decide how closely to manage the infrastructure on which its applications are deployed. But as the number of workloads deployed across the extended enterprise continues to increase, it becomes less feasible for an internal IT team to achieve that goal on its own.

Mike Vizard

Mike Vizard is a seasoned IT journalist with more than 25 years of experience. He has also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was editorial director for Ziff-Davis Enterprise, as well as editor-in-chief of CRN and InfoWorld.