VMware Tightens Kubernetes Integration to Improve DevOps Workflows

VMware this week added a Virtual Machine Service to VMware vSphere 7 with Tanzu that enables IT teams running VMware's distribution of Kubernetes to programmatically provision virtual machines.
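With this model, a VM is requested the same way as any other Kubernetes object: by submitting a declarative manifest to the API server. The sketch below builds such a manifest as a plain Python dict. The `vmoperator.vmware.com/v1alpha1` group/version and the `className`/`imageName` fields follow VMware's VM Service custom resource, but treat the exact schema, and all the example values, as assumptions to verify against your vSphere release.

```python
import json

def make_vm_manifest(name: str, namespace: str, image: str, vm_class: str) -> dict:
    """Build a VirtualMachine custom-resource body for the VM Service.

    Field names follow the vm-operator API (vmoperator.vmware.com);
    check them against the schema shipped with your vSphere release.
    """
    return {
        "apiVersion": "vmoperator.vmware.com/v1alpha1",
        "kind": "VirtualMachine",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "className": vm_class,   # VM class = CPU/memory sizing policy
            "imageName": image,      # VM image published to the namespace
            "powerState": "poweredOn",
        },
    }

if __name__ == "__main__":
    # Hypothetical names; the class and image must exist in your
    # vSphere namespace for the request to succeed.
    manifest = make_vm_manifest("demo-vm", "dev-team",
                                "ubuntu-20.04", "best-effort-small")
    print(json.dumps(manifest, indent=2))
```

The resulting JSON body could then be applied with `kubectl apply -f -` or submitted to the Kubernetes API server directly, which is what makes the provisioning step scriptable from a CI/CD pipeline.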

Sheldon D’Paiva, director of product marketing for VMware, says the offering complements an existing ability to programmatically invoke storage and networking services via the Kubernetes application programming interface (API). DevOps teams can now create Kubernetes namespaces via self-service.
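The namespace side of that self-service flow uses the standard Kubernetes core/v1 Namespace object. Below is a minimal sketch of such a request body in Python; the names and labels are illustrative, and how vSphere with Tanzu surfaces namespaces to each team may differ from a stock Kubernetes cluster.

```python
import json

def make_namespace_manifest(name: str, team: str) -> dict:
    """Build a core/v1 Namespace body a DevOps team could submit
    via the Kubernetes API rather than through a GUI."""
    return {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
            "name": name,
            # Labels are a common way to record ownership of a namespace.
            "labels": {"team": team},
        },
    }

if __name__ == "__main__":
    body = make_namespace_manifest("payments-dev", "payments")
    # Equivalent to POST /api/v1/namespaces on the API server,
    # or `kubectl create -f -` with this JSON on stdin.
    print(json.dumps(body, indent=2))
```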

Most instances of Kubernetes are already deployed on top of virtual machines to ensure isolation between workloads. VMware vSphere 7 with Tanzu takes that deployment option a step further by embedding Kubernetes directly within the hypervisor. In some use cases, VMware claims, that approach results in workloads running 8% to 10% faster than on bare metal machines, mainly because of the scheduling software included within VMware vSphere 7.

VMware vSphere 7 with Tanzu is one of two main options for deploying Kubernetes that VMware provides. The other option is to deploy Tanzu as a standalone distribution of Kubernetes that can also be integrated with the VMware Cloud Foundation suite of infrastructure software.

VMware Tanzu

In general, VMware is trying to make its distribution of Kubernetes appealing both to IT administrators and to DevOps teams, which typically prefer to invoke an API rather than employ a graphical user interface (GUI) to manage Kubernetes environments. Over time, VMware expects those teams will collaborate to manage Kubernetes environments using a mix of API- and GUI-based tools, depending on their level of programming skill.

In theory, at least, site reliability engineers (SREs) can programmatically manage IT environments at a larger scale than the average IT administrator. However, SREs typically earn a lot more than IT administrators and can be hard to find and retain. As a result, many IT teams are trying to strike a balance between hiring SREs and extending the capabilities of the average IT administrator using a variety of automation platforms that can be accessed via a GUI.

At the same time, IT teams are evaluating to what degree machine learning algorithms and other forms of artificial intelligence applied to IT operations, known as AIOps, might further automate IT. Right now it's not clear to what degree IT teams may have to migrate to a new platform versus waiting for the vendors of the IT management platforms they already employ to slipstream AI capabilities into a future release.

Regardless of how IT is managed, the level of complexity IT teams are being asked to manage is only going to increase. Emerging microservices-based applications, based on containers running on Kubernetes clusters, are being deployed alongside legacy monolithic applications running on virtual machine platforms such as VMware vSphere. In some cases, those applications are deployed in the cloud, while in others they are running in on-premises IT environments. IT teams, over time, will look to centralize the management of those different platforms as part of an effort to reduce the total cost of IT.

In the meantime, however, IT teams should expect to find themselves managing extended enterprise IT environments that are becoming more complex with each passing day.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise, as well as editor-in-chief for CRN and InfoWorld.
