Orchestration and Application Performance: The Holy Grail of Containerized Microservices

Here’s your quest: to make sure you’ve got a handle on application performance and container deployment at the same time

‘I’m going to give you a quest!’ Yes, that’s right. You just left Camelot (the troubleshooting war room), and on the way back to your office, you had a vision of a quest: to find the best way to manage your containerized applications. The first thing that pops into your head is orchestration. And if you’re talking about orchestration, then you’re probably thinking about Kubernetes: the king of orchestration.

It’s exciting that Kubernetes became King of the Hill. The already super-fast adoption of container technology exploded as open source Kubernetes—and then the private K8s distributions—started to make good on the promise of gaining control over a fairly out-of-control containerized environment.

One after another, cloud and platform providers announced that their orchestration choice was their own distribution or managed service built on Kubernetes: Red Hat OpenShift (and CoreOS Tectonic), IBM Cloud, Microsoft Azure, VMware, Pivotal Container Service, Oracle Cloud, Google Kubernetes Engine and, of course, Amazon Web Services. All of them standardized their orchestration on K8s.

Finally, there was a standard way to control the previously uncontrollable, semi-ordered chaos that is containerized application infrastructure. Now we had a method to understand when demand required another container to be spun up, and when containers could disappear.

‘Run Away!’

Unfortunately, orchestrating containers is only (maybe “not even”) half the battle. Orchestration is good at taking the set of resources KNOWN to the system, watching what resources it KNOWS are being used and making infrastructure/container deployment decisions accordingly. (Note the strategic use of capitalization.) There are two concerns when it comes to managing your containers with K8s orchestration:

  • It can only make decisions based on what the system knows.
  • It doesn’t incorporate application performance into the decision process (a gap sketched in the example below).
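
To make the second concern concrete, here is a minimal, hypothetical sketch of a resource-only scaling decision. It is not any real Kubernetes or vendor API; the names (PodMetrics, desired_replicas, the 70% CPU target) are purely illustrative. The point is that the latency your users feel never enters the calculation.

```python
# Hypothetical sketch: a scaler that only sees the resource metrics the
# orchestrator KNOWS about. Field and function names are illustrative,
# not a real Kubernetes or vendor API.

from dataclasses import dataclass


@dataclass
class PodMetrics:
    cpu_utilization: float      # fraction of requested CPU in use (known to the orchestrator)
    request_latency_ms: float   # what users experience (NOT visible to the orchestrator)


def desired_replicas(current_replicas: int, pods: list[PodMetrics],
                     cpu_target: float = 0.70) -> int:
    """Scale the way a resource-only autoscaler does: on CPU alone."""
    avg_cpu = sum(p.cpu_utilization for p in pods) / len(pods)
    # Classic proportional rule: replicas * (observed / target).
    return max(1, round(current_replicas * (avg_cpu / cpu_target)))


# CPU looks healthy, so nothing scales up...
pods = [PodMetrics(cpu_utilization=0.45, request_latency_ms=2400.0) for _ in range(3)]
print(desired_replicas(3, pods))   # -> 2: the scaler would even scale DOWN

# ...even though users are waiting 2.4 seconds per request. The latency
# signal exists, but it never enters the decision.
```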

Is Missing Data the Same as GIGO?

We learned early in our careers, especially the data scientists among us (what a cool modern title), that analytics tools fall flat when the data is no good. In fact, the internet is full of articles that tackle the problem of cleaning data for analysis. Well, I’m going a step further. The biggest issue with Kubernetes (or any orchestration, for that matter) is that there are pockets of missing data, and that missing data, at times, can be just as damaging as bad data.

Which brings us back to issue No. 2 from above: application performance. Of course, application performance has been the big, gaping hole in almost every infrastructure/platform management system of the last 20 years. That’s why the application performance management (APM) industry exists: the J2EE middleware platforms had zero visibility into production applications.

Trying to manage application infrastructure without application performance information would be like trying to determine the average air speed of a swallow without knowing if it were African or European.

You Want Performance Data? ‘Then Bring Us a Shrubbery!’

Over time, platforms adjusted and new APM tools emerged to deal with each next generation of application technology and the unique visibility (or observability) challenges it presented. Containers are no different: even the popular and extremely successful tools from the newer APM vendors couldn’t perform their monitoring magic from within those darned containers.

And THEN we went and put another layer around them with orchestration!

So that’s why issues No. 1 and No. 2 go hand in hand: without application performance information, it’s possible (maybe even probable) that your orchestrated application environment will put itself into a bad situation, simply because it doesn’t know any better. And that’s when missing data becomes the same as (or even worse than) garbage data.

The result is an all-too-common occurrence in IT operations: every light is green while your customers and end users are experiencing service impacts.

‘WHAT is your Quest?’ Visibility—No, Observability

Which brings us to the world’s greatest non-debate debate. No, it’s not “yellow or blue” (because the answer is so obviously blue). It’s the debate that seems to rage every few weeks.

Is visibility or observability more important for modern applications? 

And the answer is, “It doesn’t matter.” It doesn’t matter whether your developers instrument their applications to make them observable or you use a tool designed to automatically insert monitoring to provide visibility.
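
Either way, what comes out the other end is the same kind of data: per-request timings tied to the services your code exposes. As a toy illustration of the “developers instrument their applications” path, here is a hand-rolled decorator that records per-call latency. The names (observed, latency_samples, checkout) are hypothetical; real code would hand these measurements to an APM agent or telemetry exporter rather than an in-memory list.

```python
# Toy sketch of manual instrumentation: wrap a function so every call
# contributes a latency sample. Purely illustrative; not a real APM SDK.

import time
from collections import defaultdict
from functools import wraps

latency_samples = defaultdict(list)   # function name -> list of durations (seconds)


def observed(fn):
    """Record how long each call to fn takes."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            latency_samples[fn.__name__].append(time.perf_counter() - start)
    return wrapper


@observed
def checkout(cart_id: str) -> str:
    time.sleep(0.05)          # stand-in for real work
    return f"order for {cart_id}"


checkout("cart-42")
print(latency_samples["checkout"])   # e.g. [0.0503...]
```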

What does matter is that you gain an understanding of application performance (or, more specifically, of the performance of the services you provide to your users through your applications), but there’s a twist (isn’t there always?). Make sure that you have a way to take your performance information and your orchestration information and combine them (from a data management point of view, you’re correlating them) so that you can make orchestration decisions with an eye to service levels, plus you can see when applications are not being served well, even when your orchestration engine believes everything is okay.
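
Here is a minimal sketch of that correlation step, assuming you can export two feeds: per-pod latency from your APM tool and per-pod status from your orchestrator. The field names, pod names and the 500 ms objective are hypothetical placeholders, not any particular product’s schema.

```python
# Hypothetical sketch: join orchestrator status with APM latency on pod
# name and flag "green but slow" pods, i.e. the orchestrator says healthy
# while users are suffering. Field names and values are placeholders.

# From the orchestrator: what it believes (pod name -> phase/readiness).
orchestrator_view = {
    "checkout-7d9f-abc12": {"phase": "Running", "ready": True},
    "checkout-7d9f-def34": {"phase": "Running", "ready": True},
}

# From the APM side: what users actually experience (pod name -> p95 latency, ms).
apm_view = {
    "checkout-7d9f-abc12": {"p95_latency_ms": 180.0},
    "checkout-7d9f-def34": {"p95_latency_ms": 2300.0},
}

SLO_MS = 500.0   # hypothetical service-level objective

for pod, status in orchestrator_view.items():
    perf = apm_view.get(pod)
    if status["ready"] and perf and perf["p95_latency_ms"] > SLO_MS:
        print(f"{pod}: orchestrator reports healthy, but p95 = "
              f"{perf['p95_latency_ms']:.0f} ms exceeds the {SLO_MS:.0f} ms objective")
```

The same correlated view can feed scaling decisions, so the orchestrator acts on service levels rather than on resource counters alone.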

‘It IS the Rabbit!’

As application platforms and technologies continue to evolve, the ongoing task of figuring out how to gain that performance visibility is daunting, but it’s not the end of the world. As they have before, first with J2EE, then with SOA and now with microservices, tools and solutions emerge to help you see inside applications and solve problems when they occur.

Whether you’re trying to figure out how to orchestrate 1,000 containers, understand how the 25 serverless functions in your environment are performing or just know how your overall application is delivering services to your users, there are solutions. Sure, they’re not as easy as the Holy Hand Grenade, but there are plenty out there that can deal with one little rabbit.

So here’s your quest: to make sure you’ve got a handle on application performance and container deployment at the same time. OF COURSE, IT’S A GOOD IDEA!

* In case you aren’t sure, I have based the theme of this article on the movie “Monty Python and the Holy Grail.” All quotes—both direct and implied—are absolutely intentional.

Chris Farrell

Instana Technical Director and APM Strategist Chris Farrell has over 15 years of experience in Application Monitoring and Management. As the first Product Management Director at Wily Technology, Chris helped launch the APM industry in 2000. Since then, Chris has led Marketing and/or Product Management for five other APM and Systems Management ventures, including solutions currently marketed by Microsoft, Computer Associates, Oracle and VMware. With BSEE and MBA degrees from Duke University, and experience as a ThinkPad Development Engineer and Manufacturing Engineer at IBM, Chris brings a unique perspective that includes a cross-section of both business and technical insights.
