D2iQ Extends Reach of Kubernetes Management Platform

D2iQ this week updated its Kubernetes management platform to make it simpler to manage multiple clusters deployed across a range of IT environments.

Version 2.4 of the D2iQ Kubernetes Platform (DKP) adds support for deploying DKP on Google Cloud Platform (GCP), alongside the existing Amazon Web Services (AWS) and Microsoft Azure options, and makes it simpler to provision Kubernetes clusters on the Microsoft Azure cloud.

In addition, D2iQ has added support for secure NVIDIA graphics processing units (GPUs) that can be deployed in a traditional on-premises data center or an air-gapped environment.

D2iQ is also making available a technology preview of an enhanced Insights Engine. The engine sends automated alerts that identify missing best practices and deprecated versions of application programming interfaces (APIs) in cluster configurations. The Insights Engine also scans installed container images to surface security vulnerabilities and checks for newer versions of Helm charts.
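D2iQ has not detailed how the Insights Engine implements these checks, but the general idea of flagging deprecated API versions in cluster configurations can be sketched in a few lines. The deprecation table below is a small, hand-picked subset of real upstream Kubernetes deprecations; the function name and data structures are illustrative, not D2iQ's code.

```python
# Illustrative sketch: flag deprecated Kubernetes API versions in parsed
# manifests. A real tool would track the full upstream deprecation schedule.

DEPRECATED_APIS = {
    # (apiVersion, kind) -> replacement apiVersion (None if removed outright)
    ("extensions/v1beta1", "Ingress"): "networking.k8s.io/v1",
    ("policy/v1beta1", "PodSecurityPolicy"): None,
    ("batch/v1beta1", "CronJob"): "batch/v1",
}

def find_deprecations(manifests):
    """Return a warning string for each manifest using a deprecated API."""
    warnings = []
    for m in manifests:
        key = (m.get("apiVersion"), m.get("kind"))
        if key in DEPRECATED_APIS:
            replacement = DEPRECATED_APIS[key]
            hint = f"use {replacement}" if replacement else "no direct replacement"
            warnings.append(
                f"{m.get('kind')} uses deprecated {m.get('apiVersion')} ({hint})"
            )
    return warnings

manifests = [
    {"apiVersion": "batch/v1beta1", "kind": "CronJob"},
    {"apiVersion": "apps/v1", "kind": "Deployment"},
]
print(find_deprecations(manifests))
```

A production scanner would also consider the target Kubernetes version, since an API that is merely deprecated in one release may be removed entirely in the next.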

Other new capabilities include support for Kubernetes 1.24 and Red Hat Enterprise Linux (RHEL) 8.6, Rook Ceph storage services and enhanced Konvoy Image Builder (KIB) documentation for building virtual machine images with that open source tool.

Finally, the enterprise edition of DKP now enables IT teams to manage all the Kubernetes clusters that are a part of a workspace with a single click using a web browser interface.

DKP is an integrated development environment (IDE) for managing Kubernetes environments using a set of GitOps workflows. For IT teams that lack programming expertise, the company also provides a graphical tool in DKP Enterprise.
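GitOps, in this context, means treating configuration stored in Git as the desired state and continuously reconciling clusters toward it. The core pattern can be illustrated with a small sketch; this is not DKP's actual implementation, and the function and variable names are invented for the example.

```python
# Minimal sketch of the GitOps reconcile pattern: desired state is declared
# (in practice, in a Git repository); the loop diffs it against the observed
# cluster state and returns the changes to apply.

def reconcile(desired, observed):
    """Compute which objects to create, update or delete."""
    to_create = {k: v for k, v in desired.items() if k not in observed}
    to_update = {k: v for k, v in desired.items()
                 if k in observed and observed[k] != v}
    to_delete = [k for k in observed if k not in desired]
    return to_create, to_update, to_delete

desired = {"web": {"replicas": 3}, "db": {"replicas": 1}}
observed = {"web": {"replicas": 2}, "cache": {"replicas": 1}}
print(reconcile(desired, observed))
```

Running this loop continuously, rather than applying changes imperatively, is what lets a platform converge fleets of clusters on the state declared in Git.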

Dan Ciruli, vice president of product for D2iQ, says managing fleets of Kubernetes clusters is becoming a more pressing issue as the number of cloud-native workloads deployed continues to increase. Although Kubernetes clusters found their way into the enterprise several years ago, he notes that the pace at which workloads are being deployed is only now starting to increase substantially.

The challenge for organizations is the lack of Kubernetes expertise. Many IT teams are only starting to determine how best to manage fleets of Kubernetes clusters running everywhere from the network edge to the cloud. In effect, Kubernetes will soon be running everywhere as more applications invoke Kubernetes APIs, says Ciruli.

Less clear is how much the need to more efficiently consume IT infrastructure is influencing the growth of Kubernetes clusters designed to dynamically scale up and down as compute resources are consumed. However, it’s only a matter of time before more organizations realize that cloud-native applications are fundamentally more efficient, notes Ciruli.

In addition, it will soon become more practical to run legacy applications on top of virtual machines running on Kubernetes clusters using open source tools such as KubeVirt and libvirt, he adds.

There is, of course, no shortage of options when it comes to managing Kubernetes clusters. The issue IT teams must come to terms with now is finding the right layer of abstraction to manage all those clusters, and doing so in a way that is accessible to members of an IT team with diverse skill sets and levels of expertise.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
