Best of 2019: Kubernetes Without Scale: Reasons to Run a Personal Cluster, Part 1

As we close out 2019, we at Container Journal wanted to highlight the five most popular articles of the year. Following is the first in our weeklong series of the Best of 2019.

Kubernetes is primarily thought of as a way to manage and scale production deployments. Typically it’s used by large development teams (teams with more than a dozen engineers, for example) as they outgrow simpler solutions such as Heroku or raw EC2 instances. The smallest production-grade Kubernetes clusters tend to run across six different machines—three to run the core Kubernetes infrastructure and three to run your application workloads.

For the average individual developer, however, this is overkill. You’re just trying to host your blog, your resume and maybe a side project or two. You don’t need high redundancy. You don’t need to scale up and down with huge bursts of traffic. You don’t need five nines of uptime. So why would you use Kubernetes?

The answer boils down to one word: ecosystem. With so many teams deploying both internal and third-party apps on Kubernetes, the open source community has developed fantastic solutions for managing and configuring software within a cluster, at any scale.

What’s more, you can take advantage of this tooling without spinning up a fleet of servers. In this post, we’ll run Kubernetes on a single t2.small EC2 instance, which, with 2GB memory and a single CPU, runs about $15/month.

But first, to highlight the advantages, let’s look at the process for hosting your own instance of Ghost, an open source blogging platform, with and without Kubernetes.

Installing Ghost With Kubernetes

Once your cluster is set up for the first time, you can install Ghost like this:
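With Helm 2 (the then-current release) and the community stable chart repository, the command looks roughly like this. The `--set` keys shown (`ghostHost`, `ghostEmail`, `ghostPassword`) are illustrative; consult the chart's documentation for the values it actually supports:

```shell
# Sketch assuming Helm 2 and the (since-deprecated) stable chart repository.
# The --set keys are illustrative; check the Ghost chart's README for the
# full list of supported values.
helm install --name ghost stable/ghost \
  --set ghostHost=blog.example.com \
  --set ghostEmail=admin@example.com \
  --set ghostPassword=use-a-real-password
```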

And that’s it! We’ll also see a better way of managing all these flags below.

Installing Ghost Without Kubernetes

Ghost has some great documentation on how to get up and running with its self-hosted blogging platform. But if you want to go this route, you should be prepared for some bumps in the road.

The first thing to notice is that the Ghost documentation assumes you're running Ubuntu 16.04 or 18.04. If you're on another flavor of Linux, expect to hit some issues. Note that this isn't a problem with Kubernetes: since it runs Ghost in a Docker container, the image ships with whichever Linux distribution Ghost's maintainers choose.

The next thing to notice is the list of prerequisites:

  • NGINX (minimum of 1.9.5 for SSL)
  • A supported version of NodeJS
  • MySQL 5.5, 5.6, or 5.7 (not >= 8.0)
  • Systemd

You’ll need to visit the documentation for each of these projects and work through their installation process. And of course, they may have their own prerequisites in turn.

Again, with Kubernetes, this isn't an issue. The Dockerfile and Helm chart for Ghost ensure it has all the prerequisite software it needs, at exactly the versions it expects. Magic!

Finally, you can start walking through Ghost’s installation documentation. First, it will help you set up your MySQL and NGINX configurations. Then you’ll install Ghost’s CLI tool, which will walk you through a setup wizard that asks for some important configuration parameters (most of which we specified in the helm install command when using Kubernetes).

So that was a bit of a pain, but it’s a one-time cost, right? Once Ghost is set up, you’re done. But what about migrations and upgrades?

Upgrades and Migrations With Kubernetes

On Kubernetes, if you want to upgrade Ghost or move to a different server (maybe EC2 is getting expensive and you want to try GCP), all you need to do is re-run the helm install command. Furthermore, Kubernetes has some great tools for saving all those flags as YAML files, which can be checked into a Git repository and saved for future use. Then when it comes time to upgrade, you don't have to remember the magical incantation that got it working the first time. This practice is known as infrastructure as code, and it is a core tenet of modern DevOps practice.

Let’s use Reckoner to put our Helm installation in a single file, course.yaml, which can be stashed in version control for future reference.
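A sketch of what that file might look like, following Reckoner's 2019-era course file schema; field names may differ between Reckoner versions, so check the project's README:

```yaml
# course.yaml -- illustrative Reckoner course file; the schema shown here
# follows the 2019-era format and may differ in newer releases.
namespace: default
repositories:
  stable:
    url: https://kubernetes-charts.storage.googleapis.com
charts:
  ghost:                        # release name
    repository: stable
    chart: ghost
    namespace: ghost
    values:
      ghostHost: blog.example.com
      ghostEmail: admin@example.com
```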

If you want, you can also specify the exact version of Ghost you want to install:
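For instance, by adding a version field to the chart entry (the version number here is purely illustrative):

```yaml
charts:
  ghost:
    repository: stable
    chart: ghost
    version: "9.0.4"            # illustrative; pin whichever chart version you need
    values:
      ghostHost: blog.example.com
```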

Once you've set up your course.yaml, you just need to run reckoner plot course.yaml.

And that's it! If you want to upgrade Ghost or install it exactly the same way on another machine, just download your course.yaml and run reckoner plot course.yaml again.

Upgrades and Migrations Without Kubernetes

If you’re not using Kubernetes, you’re probably not using infrastructure as code. This means that if you want to reproduce your Ghost installation on a new server, you’ll have to follow the instructions from scratch.

If you find you’re doing this a lot, it may be worth condensing the instructions into a bash script that you can run with a single command. But keep in mind that each time you run the script, your environment will be slightly different—you cannot step in the same river twice. The script may or may not work, depending on the assumptions it makes.
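Such a script might look roughly like this. This is a hypothetical condensation assuming Ubuntu 18.04, following the broad strokes of Ghost's install docs; package names and steps will vary with your environment:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of scripting the manual install steps; assumes Ubuntu 18.04.
# Each run starts from a slightly different environment, so expect drift.
set -euo pipefail

sudo apt-get update
# Ghost's docs recommend installing a supported Node.js via NodeSource;
# the distro packages here are a simplification.
sudo apt-get install -y nginx mysql-server nodejs npm
sudo npm install -g ghost-cli

sudo mkdir -p /var/www/ghost
cd /var/www/ghost
ghost install    # interactive wizard: domain, database credentials, SSL, etc.
```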

To upgrade Ghost, it should be as easy as ghost update (a little more concise than reckoner plot course.yaml, even!), but you risk some of the same headaches as above. If the upgrade doesn't work, suddenly your server is in an undefined state and you may not even be able to get back to where you were before.

Again, with Kubernetes and infrastructure as code, these problems disappear. Because each installation and upgrade is done in a fresh environment, you know the exact state of the server, and the maintainers have tested that environment to ensure it works as expected.

Getting Started

So you’re convinced that Kubernetes might be a good way for you to manage your applications, even at small scale. In part two of this series, we’ll see what it takes to get set up with a minimal Kubernetes cluster.

Robert Brennan

Robert Brennan is director of open source software at Fairwinds, a cloud-native infrastructure solution provider. He focuses on the development of open source tools that abstract the complexity from underlying infrastructure to enable an optimal experience for developers. Before Fairwinds, he worked as a software engineer at Google in AI and natural language processing. He is the co-founder of DataFire.io, an open source platform for building APIs and integrations, and LucyBot, developer of a suite of automated API documentation solutions deployed by Fortune 500 companies. He is a graduate of Columbia College and Columbia Engineering, where he focused on machine learning.
