5 Key Considerations for Managed Kubernetes

Is managed Kubernetes a better fit for your organization than in-house Kubernetes?

While it's possible to deploy Kubernetes on your own hardware and run your own local cluster, providing your own cloud-native capacity involves dealing with a lot of complexity. Using Kubernetes is inherently automated and, relative to what it accomplishes, fairly simple; that operational simplicity is the point of having Kubernetes in the first place. Running the Kubernetes deployment itself, though, is difficult.

It’s not surprising, then, that many organizations decide to run their container workloads on a managed Kubernetes service. There are many such services from which to choose, and—especially for those new to Kubernetes—there’s a lot to be said in favor of having someone else deal with maintenance and configuration issues.

If you’re in the middle of a “build versus rent” debate, here are five broad points to bear in mind when selecting a managed Kubernetes service:

  1. Trying all the free services is an excellent idea. Most competitors in this space offer free trials, and using them is without a doubt the best way to see what the services really do. Sure, all these services offer Kubernetes plain and simple. At the same time, they offer a wide variety of tools for managing your cluster, and some of them offer features that differentiate them from the pack. Given that container orchestration is a technology that’s changing fast, the capabilities each service offers can evolve in the blink of an eye. Trying the current version of a managed service is the only reliable way to know exactly what the offering actually does. You may also want to test the degree to which it’s workable to remain “loyal” to the cloud provider you are already using: Azure offers a Kubernetes service that will obviously be attractive if you’re already happily using Azure, but you may find other services provide benefits that call for some soul searching.
  2. Make sure you understand a potential service provider’s redundancy approach as well as how they support troubleshooting. Not every service takes the same approach to high availability in master nodes (some you spin up directly yourself, while in other services the master nodes are off-limits). Some services can offer more geographical spread than others when it comes to distributing worker nodes for failover redundancy. Plus, each service makes different kinds of troubleshooting tools and performance logs available. It isn’t that one approach is necessarily better or worse than another, but you don’t want surprises after you commit and have production code up and running.
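One way to make the geographical-spread question concrete: each node in a cluster typically carries a zone label, and you can check how many distinct zones your workers actually occupy. The sketch below is a hedged illustration with hard-coded node names and zones (all hypothetical); on a live cluster you would read a label such as the well-known `topology.kubernetes.io/zone` from each node instead.

```python
# Sketch: verify worker nodes are spread across multiple failure zones.
# Node names and zone values below are made up for illustration only.

def zones_covered(node_zones):
    """Return the set of distinct zones the given nodes occupy."""
    return set(node_zones.values())

def is_zone_redundant(node_zones, min_zones=2):
    """True if the nodes span at least min_zones distinct zones,
    so losing one zone doesn't take out every worker."""
    return len(zones_covered(node_zones)) >= min_zones

# Hypothetical cluster: three workers across two zones.
nodes = {
    "worker-1": "us-east-1a",
    "worker-2": "us-east-1a",
    "worker-3": "us-east-1b",
}

print(zones_covered(nodes))      # two distinct zones
print(is_zone_redundant(nodes))  # redundant across zones
```

A check like this is worth running during a trial, since some providers place all nodes in a single zone by default unless you explicitly ask for a regional or multi-zone cluster.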
  3. Understand any extra costs for high availability. While it’s true that Kubernetes is Kubernetes, it’s also true that providers handle master nodes in different ways. For production environments, you’ll want at least three master nodes running, quite possibly more. Some providers create multiple masters behind the scenes and treat them as part of the basic cost of running your cluster. Others charge for all nodes, including the masters. There are good reasons behind both approaches, but a per-master charge means that smaller clusters in particular will be relatively more expensive.
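The arithmetic behind that last point is easy to sketch. Assuming a flat per-node price (the numbers below are placeholders, not any provider’s actual rates), a per-master billing model adds a fixed overhead of three or more nodes, which looms large on a small cluster and shrinks to a rounding error on a big one:

```python
# Sketch: compare monthly cost under bundled vs. per-master billing.
# All prices are hypothetical placeholders, not real provider rates.

def cluster_cost(workers, masters=3, node_price=70.0, charge_masters=False):
    """Monthly cost: workers are always billed; the three masters are
    billed only under a per-master (charge_masters=True) model."""
    billed_nodes = workers + (masters if charge_masters else 0)
    return billed_nodes * node_price

# 3-worker cluster: masters double the bill (210 -> 420).
small_overhead = cluster_cost(3, charge_masters=True) / cluster_cost(3)

# 30-worker cluster: the same masters add only 10% (2100 -> 2310).
large_overhead = cluster_cost(30, charge_masters=True) / cluster_cost(30)

print(small_overhead)  # 2.0
print(large_overhead)  # 1.1
```

The takeaway: per-master pricing isn’t unfair, but when comparing quotes you should normalize by what your cluster will actually look like, not by headline per-node prices.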
  4. Determine your commitment (if any) to multiple public clouds. What you might call a “typical” managed Kubernetes service runs within the environment of a more generic public cloud infrastructure. Use Google’s managed Kubernetes, for example, and your nodes will run (not surprisingly) within Google Cloud; the Amazon offering runs within AWS; and so on. If you want to run nodes on more than one service (or in a hybrid configuration), you’ll need to look at service providers that expressly support this, such as OpenShift or Heptio Kubernetes Subscription (HKS). There’s more complexity to such a setup, but there are nice advantages in terms of security and fault tolerance.
  5. Turnkey is also an option. You may be resistant to the idea of handing the keys to the cluster over to the service provider. Turnkey services such as Stackpoint or Containership Kubernetes Engine (CKE) take a “push-button” approach to deploying your own cluster on a public cloud and can serve as a kind of “best of both worlds” approach to creating your cluster. Note, though, that you’ll need to understand what’s involved in maintaining and troubleshooting this sort of cluster, because by design it’s not maintained and updated in the same way as a managed service. These services are near cousins to managed service providers and are worth a look as you determine the best fit for your specific needs.

And here’s some good news: Because there’s consistency across managed Kubernetes services, containers developed for deployment on a managed service can be migrated to home-grown deployments seamlessly. You can, in other words, keep complexity to a minimum while you gain expertise with how things are supposed to run, then decide later whether you want to develop sufficient in-house capabilities to run a Kubernetes cluster on your own.

Robert Richardson

Robert Richardson has years of experience in software development and in writing about technology, cybersecurity and AI in particular. He served as editorial director of TechTarget’s Security media group until late 2018. Prior to this post, he was editorial director at Black Hat, developing online products for the highly successful computer security conference. He spent several years as the director of the Computer Security Institute, where he was on staff from 2003 through early 2011. In that capacity, he gave keynote presentations on three continents, often speaking about the CSI Computer Crime and Security Survey done in conjunction with the FBI, an undertaking he directed for several years. Prior to CSI, he was senior editor of Communications Convergence magazine for two years, where his beats included telecom security, wireless, Internet messaging and next-generation phone systems. Robert started his career as a systems-level programmer developing early PC network applications, which led to his becoming a frequent contributor to magazines and Web publications such as Ziff-Davis Internet Computing, BYTE, Network Magazine and Small Business Computing. On occasion, Robert has also taught introductory courses in computer science at Swarthmore College.