As a growing number of organizations deploy services and applications on Kubernetes, adoption of hybrid and multi-cloud environments is rising, along with interest in serverless technology.
A survey of 200 organizations in the U.S., commissioned by Cockroach Labs and Red Hat, finds nearly half (46%) of overall respondents chose transactional workloads as their primary concern related to effectively architecting for and deploying with Kubernetes.
Concern Over Complex Migrations
Complex migration of legacy workloads was another priority concern, with 44% of survey respondents citing challenges in moving those workloads to their new distributed application architecture.
“While the survey data shows a majority of companies manage their own infrastructure, we have also heard, organically, that it is getting increasingly difficult to hire these resources in-house,” Jim Walker, principal product evangelist at Cockroach Labs, says. “This presents a fairly big risk, especially as they grow their usage of Kubernetes.”
Walker notes that, within the report data, it was interesting to see a nearly even split when it comes to how infrastructure duties are assigned: Roughly half of those surveyed share infrastructure management across multiple teams, while slightly more than half (54%) trust these duties to a single dedicated team.
“Taking that a step further, when we examine these two factors together, we find the distribution of responsibilities—a dedicated DevOps/SRE team versus cross-functional teams—is consistent whether the teams involved are supporting many workloads or just a few,” he says.
For those running their own in-house DevOps/SRE team, the data shows that the number of workloads directly correlates with the size of that team.
Smaller Loads, Smaller Teams
“It is only logical that smaller loads equal a smaller dedicated team, larger loads equal larger teams,” Walker explains.
Rich Lane, chief strategy officer at Netenrich, says that when an application goes to production, it is typically supported by a traditional operations team.
“Overall, though, the application itself represents such a small portion of the overall hybrid environment they support there is no gain to be had in outsourcing management of these services,” he says. “On the other hand, a small startup or SMB may have structured itself from the get-go to have DevOps engineers own support for what they build.”
From Lane’s perspective, there isn’t much risk in either model of servicing the application in-house, unless there simply is no internal expertise that can be tapped for support.
“Essentially, the largest two concerns are: How will we find a performance problem when running in production? And how do I know my containers are secure? These are both often overlooked during the design phase of an application and then raised as issues during the production turnover.”
Lane says firms should start from the beginning with evaluations of monitoring tools that can cover Kubernetes and microservice environments, but also integrate with tools and technologies that comprise the rest of the IT real estate.
“The same goes for security,” he says. “Having a RASP-based security tool is a must. Teams must also decide who owns container security and what the policy is for container security in production.”
Nathan Demuth, senior director of cloud services at Coalfire, says there are two aspects of container and orchestration security that organizations need to address: The first is the containers themselves, and the second is pipeline and orchestration.
“Containers themselves need to be hardened and scanned, in development and potentially even in runtime, and they should also be required to authenticate, a common step skipped by developers,” he says. “As for the pipeline and orchestration, containers need to be pulled from reputable sources; stored in secured repositories; tagged and signed with trust certificates; and, when new versions become available, outdated versions should be archived from the repos.”
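As a rough illustration of the hardening and trusted-sourcing practices Demuth describes, a Kubernetes pod spec can pin an image tag from a private registry (pulled with credentials, so access is authenticated) and lock down the container’s runtime privileges. All names below are hypothetical, and this is a minimal sketch rather than a complete policy:

```yaml
# Hypothetical pod spec illustrating hardened, authenticated container practices.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app                   # illustrative name
spec:
  imagePullSecrets:
    - name: private-registry-creds     # credentials for the trusted, private repo
  containers:
    - name: app
      # Pinned tag pulled from a reputable (private) source, not :latest
      image: registry.example.com/team/app:1.4.2
      securityContext:
        runAsNonRoot: true             # hardening: refuse to run as root
        readOnlyRootFilesystem: true   # no writes to the container filesystem
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]                # drop all Linux capabilities by default
```

Image scanning and signature verification would typically sit alongside this in the pipeline (for example, a scanner gate in CI and an admission controller that checks trust signatures) rather than in the pod spec itself.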
He recommends orchestrators be evaluated for least-privilege configurations to ensure that movements within CI/CD are authenticated, logged and monitored.
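One way to apply that least-privilege evaluation is to scope the CI/CD pipeline’s Kubernetes identity to only the verbs it needs, so every movement is authenticated under a dedicated service account and shows up attributably in the audit log. A minimal, hypothetical RBAC sketch (namespace and names are illustrative):

```yaml
# Hypothetical least-privilege RBAC for a CI/CD deployer identity.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-deployer
  namespace: prod                # illustrative namespace
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "patch"]      # just enough to roll out a new image tag
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer-binding
  namespace: prod
subjects:
  - kind: ServiceAccount
    name: ci-pipeline            # the orchestrator's identity; its actions
    namespace: prod              # are authenticated, logged and monitorable
roleRef:
  kind: Role
  name: ci-deployer
  apiGroup: rbac.authorization.k8s.io
```

Because the role grants no `create`, `delete` or cluster-wide permissions, a compromised pipeline credential has a much smaller blast radius.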
Tying Kubernetes to Workloads
The survey found respondents were nearly unanimous in turning to Kubernetes to orchestrate their production workloads, with a close to equal split between companies running a small number (five or fewer) of their applications and services on Kubernetes and companies running at least triple that load (15 or more).
“While that may not be entirely surprising, what is interesting is that this split could represent an adoption journey,” Walker says. “Companies are likely to start off with a smaller number of applications or services on Kubernetes as they establish a comfort level in working with the complexities of distributed applications.”
He explains that, once they are successful, adding more workloads becomes much easier, leading to accelerated growth and increased investment in evolving and improving their stack, giving rise to a new standard and set of tools for managing data. It represents an economy of scale.
Walker notes that serverless, and the buildup around it, is gaining momentum because it represents a fairly large and more efficient shift not only in how we manage workloads but, more importantly, in how we consume and pay for them.
“We’re believers in this future because, from our foundation, it has been our goal … to build a database that is nothing more than a SQL API in the cloud, exposed by endpoints all over the planet,” he says. “It is a model where you never consume what you don’t use. Serverless is right, and we feel this is the ultimate delivery model for nearly all applications.”