Automation and Orchestration in a Container World

Since Docker began popularizing containers about five and a half years ago, this technology has become a key element of digital business transformation. Characteristics such as portability, highly efficient sharing of system resources, and broad support have made containers an increasingly popular choice. In fact, a March 2017 Forrester study found that 66 percent of organizations that had adopted containers experienced accelerated developer efficiency, while 75 percent of companies achieved a moderate-to-significant increase in application deployment speed.

These positive impacts are accelerating container adoption; in SolarWinds’ recent “IT Trends Report 2018,” 44 percent of respondents ranked containers as the most important technology priority today, and 38 percent ranked them as the most important technology priority three to five years from now. These industry statistics confirm that interest in and adoption of container technology are increasing with time.

To successfully deploy containers, technology professionals must understand the impact this technology will have on the way their applications and infrastructure run, and uplevel both their skills and tools accordingly. Even a modestly sized container deployment calls for orchestration to help manage new aspects of this technology’s life cycle. With the rapid adoption of containers, container orchestrators saw a sharp rise in popularity as they streamlined management for IT administrators and provided for the general caretaking (scheduling, service discovery, health checking, and so on) of a cluster of nodes (servers) running containers.

Deploying Containers: Impact on the Network and Security

The impact containers have on an IT environment is in part contingent on the type of containerized workload deployed. To understand this better, let’s distinguish two classes of containerized workloads: system containers and application containers. A system container is VM-like in nature; it generally contains a full operating system image and runs multiple processes. An application container is typically lighter weight, both in footprint and in the number of processes running (ideally, only a single process). Both classes of containers use namespaces for resource isolation and control groups (cgroups) to manage and enforce resource limits. However, the former lends itself to containerizing pre-existing applications, while the latter is the best-practice pattern for applications written from the start to run as containers.
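
To make the resource-control side of this concrete, here is a minimal sketch that launches an application container with cgroup-enforced CPU and memory limits. It assumes the Docker SDK for Python (the docker package) and a locally running Docker daemon; the image name and limit values are illustrative, not prescribed by anything above.

```python
# Minimal sketch: launching an application container with cgroup-enforced
# resource limits via the Docker SDK for Python (pip install docker).
import docker

client = docker.from_env()  # connect to the local Docker daemon

container = client.containers.run(
    "nginx:1.25",            # illustrative image; any app image works
    detach=True,
    mem_limit="256m",        # cgroup memory limit
    nano_cpus=500_000_000,   # cgroup CPU quota: 0.5 of one CPU
)
print(container.name, container.status)
```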

So, how does this distinction materialize in the administration of IT environments? When a system container is deployed and treated much like a VM, the network may not be a significant consideration. By contrast, using application containers to deploy microservices requires requests and application traffic to transit several containers and hosts over the network (potentially even several different networks), making it crucial to have a network monitoring system in place to track latency and ensure requests are served in a timely and effective manner. This is easier said than done; although microservices can allow teams to iterate more quickly, they split an application into many separate components, which can be difficult to monitor.
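
As a rough illustration of the kind of visibility this requires, the sketch below times requests against a handful of service endpoints. The URLs are hypothetical, and a real deployment would rely on a dedicated network monitoring system rather than ad hoc probes like this.

```python
# Minimal latency probe for service-to-service hops (stdlib only).
import time
import urllib.request

ENDPOINTS = [
    "http://frontend.local:8080/health",   # assumed service addresses
    "http://cart.local:8081/health",
    "http://payments.local:8082/health",
]

for url in ENDPOINTS:
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            latency_ms = (time.perf_counter() - start) * 1000
            print(f"{url}: {resp.status} in {latency_ms:.1f} ms")
    except Exception as exc:  # timeouts, DNS failures, connection resets
        print(f"{url}: FAILED ({exc})")
```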

As with the implementation of any new technology, security must be strongly considered. Vulnerability scanning and runtime protection should be woven into the security practices of any organization deploying containers. Vulnerability scanning is necessary because various packages and libraries are built into each container, whether those are pulled from open source repositories or internal code repositories, or the container images themselves are reused from public container registries. Security scanning must be completed to verify that there are no inherent vulnerabilities within a given image’s layers.
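
As one way to fold scanning into a pipeline, the sketch below shells out to a scanner CLI and counts serious findings in an image’s layers. It assumes the open source Trivy scanner is installed; the article doesn’t prescribe a particular scanner, so treat the tool choice and output handling as assumptions.

```python
# Sketch: scanning an image's layers for known CVEs by shelling out to a
# scanner CLI (Trivy assumed here; its JSON report has a "Results" list).
import json
import subprocess

def scan_image(image: str) -> list[dict]:
    """Return HIGH/CRITICAL vulnerabilities found in the image's layers."""
    result = subprocess.run(
        ["trivy", "image", "--format", "json",
         "--severity", "HIGH,CRITICAL", image],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    findings = []
    for target in report.get("Results", []):
        findings.extend(target.get("Vulnerabilities") or [])
    return findings

vulns = scan_image("nginx:1.25")  # illustrative image
print(f"{len(vulns)} HIGH/CRITICAL findings")
```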

Whether containerizing an entire system or building an application container from scratch, static analysis of an image’s layers and its Dockerfile (a text document that contains all the commands a user could call on the command line to assemble an image) is key to identifying vulnerabilities. The second step of maintaining security comes after the container images have been given a clean bill of health and deployed: implementing runtime security. Containers can exhibit abnormal behavior, whether caused by an administrator not adhering to operational best practices or by a malicious actor who has penetrated a container. Runtime security helps ensure that once deployed, containers function within their intended bounds of operation.
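
To illustrate what static analysis of a Dockerfile can look like, here is a toy linter that flags a few common red flags. Purpose-built tools go much further; the rules below are illustrative assumptions, not a complete policy.

```python
# Toy static checks on a Dockerfile, illustrating the kind of static
# analysis described above.
from pathlib import Path

def lint_dockerfile(path: str) -> list[str]:
    warnings = []
    for n, line in enumerate(Path(path).read_text().splitlines(), 1):
        stripped = line.strip()
        if stripped.upper().startswith("FROM") and (
            ":" not in stripped or stripped.endswith(":latest")
        ):
            warnings.append(f"line {n}: unpinned base image (avoid :latest)")
        if stripped.upper().startswith("ADD "):
            warnings.append(f"line {n}: prefer COPY over ADD")
        if stripped.upper().startswith("USER ROOT"):
            warnings.append(f"line {n}: container runs as root")
    return warnings

for warning in lint_dockerfile("Dockerfile"):
    print(warning)
```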

With this in mind, how can technology professionals best manage container technology to reap the maximum benefits of leveraging it?

Mitigating Challenges through Automation and Orchestration

Leveraging automation and orchestration helps technology professionals save time and money, prevent issues, and ultimately deploy containers as effectively as possible. Once a container deployment grows beyond a few hosts, technology professionals typically find the operational functionality provided by container orchestrators critical. Orchestrators commonly provide cluster management (host discovery and host health monitoring), scheduling (placement of containers across hosts in the cluster), service discovery (automatic registration of new services and provisioning of friendly DNS names), and so on. Orchestration is key to scaling deployments and to facilitating efficient collaboration between different engineering teams.
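
To ground one of these duties, the toy sketch below schedules containers onto the least-loaded host, one simple placement strategy. Real orchestrators such as Kubernetes weigh many more constraints (affinity, resource requests, taints); the hosts and workloads here are made up.

```python
# Toy illustration of one orchestrator duty, scheduling, using a
# least-loaded placement strategy.
hosts = {"node-1": 0.20, "node-2": 0.65, "node-3": 0.40}  # current CPU load

def schedule(container: str, cpu_cost: float) -> str:
    """Place a container on the host with the lowest current load."""
    target = min(hosts, key=hosts.get)
    hosts[target] += cpu_cost  # account for the new workload
    return target

for name in ["web-1", "web-2", "worker-1"]:
    print(f"{name} -> {schedule(name, 0.15)}")
```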

As container implementation becomes more mainstream, clear standards in container technology have begun to emerge, particularly for the foundational components of containers. The Open Container Initiative (OCI), for example, exists to create open industry standards around runtime and image specifications, ensuring vendors can deliver on the promise of portability and allowing containers to be shipped and run interoperably across different systems.

Various tools and best practices in the realm of automation and orchestration can help facilitate successful container deployment and management. For example, leveraging a configuration management tool such as Puppet gives technology professionals a way to automatically inspect and deliver their software.
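
The idea underlying tools like Puppet is declarative desired state: describe the state a resource should be in, check reality, and converge only when there is drift. The sketch below illustrates that model in miniature; it is not Puppet’s DSL, and the service name and systemctl-based checks are assumptions.

```python
# Minimal illustration of the desired-state model behind configuration
# management tools (not Puppet's actual DSL).
import subprocess

DESIRED = {"service": "docker", "state": "running"}  # assumed target state

def is_running(service: str) -> bool:
    # "systemctl is-active --quiet" exits 0 when the unit is active.
    probe = subprocess.run(["systemctl", "is-active", "--quiet", service])
    return probe.returncode == 0

def converge(resource: dict) -> None:
    """Apply a change only if reality has drifted from the desired state."""
    if resource["state"] == "running" and not is_running(resource["service"]):
        subprocess.run(["systemctl", "start", resource["service"]], check=True)
        print(f"started {resource['service']}")
    else:
        print(f"{resource['service']} already in desired state")

converge(DESIRED)
```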

Additionally, treating infrastructure as code can help companies working toward faster deployments, as this method calls for managing infrastructure with the same tools and processes software developers use, such as automated testing, continuous integration, code review, and version control. These practices enable infrastructure changes to be completed more easily, rapidly, safely, and reliably. Understanding the type and extent of management that a given container deployment requires is also essential; in some cases a solution such as Docker in swarm mode is sufficient, while others may call for a tool such as Kubernetes.
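
As a small example of infrastructure code that can live in version control and run in CI, the following smoke test starts a container and verifies it answers HTTP. It assumes the Docker SDK for Python; the image and port mapping are illustrative.

```python
# Sketch of treating infrastructure as code: a smoke test runnable in CI
# alongside application tests (pytest-style).
import time
import urllib.request

import docker

def test_web_container_serves_http():
    client = docker.from_env()
    # Start a throwaway container, publishing port 80 on host port 8080.
    container = client.containers.run(
        "nginx:1.25", detach=True, ports={"80/tcp": 8080}
    )
    try:
        time.sleep(2)  # crude wait for startup; a real test should poll
        with urllib.request.urlopen("http://localhost:8080", timeout=5) as r:
            assert r.status == 200
    finally:
        container.remove(force=True)  # clean up even if the assertion fails
```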

Successful Container Management and Deployment

In addition to leveraging automation and orchestration, technology professionals should develop new skills and leverage tools and services when implementing container technology in their organization. Here are a few tips to help facilitate successful container deployment and management:

  • Get certified in third-party tools: For NetAdmins, SysAdmins, or those who are “container curious,” getting certified in Docker and Kubernetes can help uplevel container management skills. Whether or not certification is achieved, merely studying the curriculum provides a helpful guide to aspects of container management that a tech pro may otherwise be unaware of.
  • Monitor as a Discipline (MaaD): Companies expect performance guarantees, cost efficiency, and service availability from their IT departments. One effective way to meet these requirements while using container technology is by leveraging monitoring tools. Actively tracking activity as application traffic transits the network is crucial, and using a monitoring tool to set up automated alerts can also be beneficial when container deployments fail.
  • Conduct regular vulnerability scans: Organizations that choose to work with container technology will need to create a security framework and set of procedures that are consistently evaluated and updated to prevent attacks. Conducting regular scans of container images is key, as it provides visibility into their security posture, including vulnerabilities, malware, and policy violations. Even prominent, popular container images on public container registries can be laced with vulnerabilities. Many container security vendors, such as Twistlock, and container registries offer help identifying the issues these vulnerabilities introduce. Container orchestration systems and container runtimes also enable regular health checks, probing the software inside containers to ensure that the application is still healthy and functional (a minimal probe is sketched after this list).
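
Here is the minimal health-check probe referenced above, of the kind orchestrators run against containers (for example, Kubernetes liveness probes or a Docker HEALTHCHECK). The endpoint, timeout, and retry budget are illustrative assumptions.

```python
# Minimal HTTP health-check probe of the kind orchestrators run against
# containers.
import time
import urllib.request

def probe(url: str, retries: int = 3, backoff: float = 1.0) -> bool:
    """Return True if the service answers 200 within the retry budget."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except Exception:
            pass  # connection refused, timeout, etc.
        time.sleep(backoff * (attempt + 1))  # linear backoff between tries
    return False

healthy = probe("http://localhost:8080/healthz")  # assumed endpoint
print("healthy" if healthy else "unhealthy: restart or reschedule")
```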

Words of Encouragement: Skill Up and Fear Not

With container use increasingly becoming mainstream, it’s important for technology professionals to embrace containers now rather than let containers happen to them. But fear not: for tech pros hesitant to implement this new technology, containers are not as foreign or intimidating as they seem. While containers are distinct from virtual machines, administrators familiar with VM management will find many familiar paradigms reincarnated. To get started, administrators can even experiment with containers on their personal laptops.

Implementing automation tooling and orchestration can smooth container use and adoption, and several additional best practices help ensure successful implementation and deployment. Beyond automation and orchestration, technology professionals without a background in software engineering should also learn scripting, as it will serve them well as deployments scale. Practicing efficient container image building, such as using multi-stage builds and sorting multi-line arguments alphanumerically, is also key; Docker publishes best practices for building container images.

At the end of the day, as emerging technologies with higher-level capabilities such as functions and serverless platforms, service meshes, analytics, and machine learning are poised to augment container technology in the coming years, it’s important for administrators to skill up now more than ever.

Lee Calcote

Lee Calcote is the Head of Technology Strategy at SolarWinds, where he stewards strategy and innovation across the business. Previously, Calcote led software-defined data center engineering at Seagate, up-leveling the systems portfolio by delivering new predictive analytics, telemetric and modern management capabilities. Prior to Seagate, Calcote held various leadership positions at Cisco, where he created Cisco’s cloud management platforms and pioneered new, automated, remote management services.
