Spectro Cloud Makes Bare Metal Kubernetes Contribution to CNCF

It may soon get easier to provision and manage bare metal servers running Kubernetes following a contribution from Spectro Cloud to the open source Cluster API (CAPI) project being advanced under the auspices of the Cloud Native Computing Foundation (CNCF).

As a provider of a platform for managing fleets of Kubernetes clusters, Spectro Cloud has incorporated the Metal as a Service (MAAS) interface originally developed by Canonical. Spectro Cloud’s contribution makes it possible to employ CAPI to provision and manage Kubernetes clusters on bare metal servers in the same way as clusters running Kubernetes on top of virtual machines.
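In practice, CAPI expresses a cluster as declarative manifests, with an infrastructure provider translating them into operations against the underlying platform — here, MAAS-managed bare metal. The sketch below is illustrative only: the `MaasCluster` kind, API version and field names are assumptions modeled on common CAPI provider conventions, not a documented interface.

```yaml
# Illustrative CAPI manifest pairing a generic Cluster with a bare metal
# infrastructure reference. The MaasCluster kind and its fields are
# hypothetical, following typical CAPI provider naming conventions.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-baremetal
spec:
  infrastructureRef:            # delegates provisioning to the MAAS provider
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: MaasCluster
    name: demo-baremetal
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: MaasCluster               # hypothetical kind for the MAAS provider
metadata:
  name: demo-baremetal
spec:
  dnsDomain: example.maas       # assumed field: DNS domain managed by MAAS
```

Once such a provider is installed, applying manifests like these drives machine allocation and operating system deployment through MAAS using the same declarative workflow IT teams already use for virtual machine-based clusters.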


Spectro Cloud CTO Saad Malik says that approach eliminates the need to write complex provisioning scripts for bare metal servers, while also avoiding the 7% to 10% processor and memory overhead that the hypervisors used to run virtual machines typically impose on a cluster.

In addition, organizations that employ commercial virtual machine software can stop licensing that software altogether, streamlining IT operations management by reducing their reliance on IT experts with virtualization skills.

Of course, in some organizations, the IT professionals who know how to provision virtual machines outnumber those who know how to provision bare metal servers. In fact, in some cases it has been so long since an IT team provisioned a bare metal server that its members have forgotten how to do it.

Nevertheless, the number of bare metal servers running Kubernetes is expected to rise steadily in the months ahead. Many organizations default to deploying Kubernetes on virtual machines simply because they lack the tools to deploy it any other way. As the tools for managing fleets of Kubernetes clusters mature, it is becoming apparent that IT teams can deploy Kubernetes just as easily on bare metal servers as on virtual machines.

In general, life cycle management of Kubernetes clusters is maturing rapidly as more centralized IT teams are exposed to Kubernetes. There are still plenty of instances where developers might stand up a Kubernetes cluster on their own, but as more of those clusters are deployed in production environments, traditional IT operations teams are becoming more involved. In some cases, organizations have retained site reliability engineers (SREs) to manage those clusters, while others are arming IT administrators with graphical tools that enable them to manage Kubernetes clusters without knowing how to programmatically configure each setting.

Most of those teams, in time, will find themselves running a mix of clusters based on virtual machines and bare metal servers, some deployed in the cloud and others located in on-premises IT environments. In fact, in many cases, the Kubernetes clusters will be collaboratively managed by DevOps teams that have SREs as well as by IT administrators.

Regardless of how Kubernetes clusters are managed, the one thing that is certain is there will soon be a lot more of them running everywhere from the network edge to the cloud. The challenge IT teams are now faced with is how best to manage a highly distributed Kubernetes environment at scale.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
