As we close out 2021, we at Container Journal wanted to highlight the most popular articles of the year. Following is the twelfth in our series of the Best of 2021.
Even before the global pandemic turned everything upside down and forced enterprises to accelerate digital transformation, many businesses were already shifting toward Kubernetes and containers. For some, the move was the logical next step; others have yet to reap the rewards. Either way, every business was after the same result: accelerating app development and speeding time-to-market by taking advantage of modern application architecture.
Kubernetes and containers help businesses move faster, but paradoxically, implementing these technologies successfully is slow and complex. For one thing, there is inherent sprawl: you will likely end up with a broad proliferation of Kubernetes clusters, some on-premises and some with cloud providers, with different teams driving those deployments. That hard-to-manage situation makes other critical operations, such as security tooling, harder still.
Those wanting to dive into Kubernetes often choose to build something from open source components; they are readily available and already vetted by the community. Generally, the idea is to create something custom-made and just right for the business. But DIY Kubernetes is like having a suit tailored from scratch: few realize up front what a long-tail effort it is.
The push for DIY Kubernetes can also come from line-of-business DevOps teams wanting to perform an end run around IT (and its more regimented procedures). At first glance, this route seems more straightforward, less bureaucratic and ultimately faster. However, teams will need to consider how to manage governance, security, ongoing operations and the costs tied to this approach.
Others may be reluctant to commit to Kubernetes at all. They worry they are headed down the same thorny path as OpenStack, where technologists struggled with complications and spent a year or two getting it to work, only to see its benefits surpassed by other means. Those technologists were burned and are understandably hesitant to step into another mess. But here is the thing: whether they jump into the deep end of the pool or wade in slowly, both kinds of tech pro want to reap the benefits of Kubernetes as quickly as possible.
How quickly should you implement Kubernetes? To determine this, first ask yourself a question: how are you spending your calories? Or, to put it another way, how well are you expending time, resources and effort just to get basic infrastructure out of the gate? Then ask yourself whether that work is creating business value.
The good news: there is a faster way to get Kubernetes and containers up and running. This method is also more accessible, more resilient and better at helping you meet your business goals.
A Unified Platform
When you divorce technology from business requirements, you open yourself up to adopting technology for its own sake. Never let what your tech is doing drift out of alignment with your ultimate business goal; misalignment sets you up for boondoggles such as the sunk-cost fallacy.
Most of us tend to think of new technology as an opportunity for a clean slate, when that is rarely the case. We may have the latest tech on hand, but we still have monolithic business-critical applications, on-premises servers, multiple cloud platforms and the operation's overall business needs. There are always dependencies to consider, too, whether an identity system or a database. And we are almost always modifying and modernizing an existing app. In effect, we are usually building something new on top of something old.
Beyond just getting started with Kubernetes, there is an arguably even more significant challenge: bringing the implementation to scale so it can meet the production-level requirements of the enterprise. The fastest way to implement Kubernetes and containers at scale is to opt for virtualization with Kubernetes built in.
Virtualization can provide a unified platform for managing both virtual machines and containers in a single infrastructure stack. Applications can then be deployed using any combination of virtual machines and containers.
When Kubernetes is built into the virtualization platform itself, businesses can consolidate their modern and traditional application environments into a single stack. Operations can leverage existing technology, skill sets and processes while also building for the future.
Because everything is centralized on a unified platform, developers and IT admins can come together to build, run and manage modern applications. Teams can attach policies to an entire group of virtual machines, containers and Kubernetes clusters. IT administrators can perform these operations from a familiar interface while providing the security and resource isolation that modern applications need.
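As a hypothetical sketch of what attaching one policy to a mixed group of workloads can look like: on platforms that expose VMs as namespace-scoped Kubernetes objects (a KubeVirt-style setup, for example), a single ResourceQuota on a namespace governs containers and VMs alike. The namespace and quota names below are illustrative, not from any specific product:

```yaml
# Hypothetical example: on a platform where VMs run as namespace-scoped
# Kubernetes objects, one ResourceQuota caps the combined footprint of
# both containers and VMs in that namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota        # illustrative name
  namespace: team-a         # namespace holding both containers and VMs
spec:
  hard:
    requests.cpu: "20"      # total requested CPU across all workloads
    requests.memory: 64Gi   # total requested memory
    pods: "50"              # cap on pod count (VM-backing pods included)
```

Because the policy lives at the namespace level, an administrator never has to enumerate individual workloads; anything scheduled into `team-a` is bound by it automatically.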
Virtualization can reduce operational overhead by automating the deployment of Kubernetes clusters, along with ongoing Day 2 operations such as backup, migration, patching and monitoring. But when you opt for a typical homegrown model, solutions to these problems must be custom-built. This slows time-to-value and increases cost.
Virtualization also offers a better developer experience—which, according to McKinsey, fuels business performance. When developers have self-service access to the virtualization infrastructure via a unified and native Kubernetes API, they can quickly provision Kubernetes pods, namespaces, clusters, VMs and even developer services, such as databases and S3-compatible object storage. The virtualization layer can provide better flexibility and workload mobility, API-driven automation and speed to support developer self-service access.
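To make the self-service idea concrete, here is a hedged sketch of declaring a VM through the same Kubernetes API used for pods, in the style of KubeVirt's `VirtualMachine` resource. The VM name, namespace and sizing are illustrative assumptions, not details from the article:

```yaml
# Hypothetical sketch: a developer-provisioned VM declared via the
# Kubernetes API, KubeVirt-style, alongside ordinary container workloads.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: dev-vm              # illustrative name
  namespace: team-a
spec:
  running: true             # start the VM as soon as it is created
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 2Gi     # VM memory request
            cpu: "1"        # VM vCPU request
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:    # boot disk shipped as a container image
            image: quay.io/containerdisks/fedora:latest
```

A developer with access to the namespace can `kubectl apply` this file exactly as they would a Deployment, which is the unified, API-driven self-service model described above.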
With Kubernetes running on virtualization, the platform handles resiliency. Typically, the virtualization layer restarts a failed or problematic Kubernetes node before Kubernetes even detects the problem. Virtualization also protects the availability of the Kubernetes control plane, using mature heartbeat and partition-detection mechanisms to monitor servers, Kubernetes VMs and network connectivity. With proactive failure detection, live migration, automatic load balancing, restarts after infrastructure failures and highly available storage, users can prevent service disruptions and performance impacts.
Virtualization delivers performance, too; with powerful resource management and NUMA optimizations, it can even exceed bare-metal performance. Configurations can support performance-sensitive stateful applications by enabling direct access to underlying direct-attached storage hardware. A distributed resource scheduler balances efficiency and performance for every workload in the cluster, reducing resource waste while allowing higher utilization of the underlying infrastructure.
All of this can lead to lower costs. Virtualization infrastructure can deliver the lowest overall TCO through capex savings from higher resource utilization and opex savings due to simpler management.
Virtualization with Kubernetes meets organizations where they are. Other solutions, whether homegrown on bare metal or consumed from the cloud, require a great deal of change to operational tooling, processes and teams, which leads to slower adoption and increased cost. Meeting businesses where they are allows them to derive value quickly.
Virtualization supports evolutionary change. This allows businesses to get Kubernetes up and running quickly—while they modernize more broadly at their own pace—and there’s plenty of business value in that.