Containers Creating Major DevSecOps Challenge

Cyberattacks against containers have moved from being a potential cause for concern to an issue that will have a much greater material impact on the rate at which cloud-native applications are deployed. IT organizations are clearly eager to build microservices-based applications on containers, especially when those applications are core to digital business transformation initiatives. However, the more applications attain true mission-critical status, the more troubling container security issues become.

Fortunately, most container security breaches today involve what are often viewed as nuisance attacks: cryptojacking. According to a recent report published by Aqua Security, 95% of the compromised container images it discovered were designed to hijack resources for the sole purpose of cryptocurrency mining. Cryptojacking may be considered the IT equivalent of a victimless crime, in that the abundant compute resources of cloud service providers are simply employed for an illicit purpose.

However, Asaf Morag, lead data analyst for Aqua Security, notes it’s not much of a leap from what appears to be just another innocuous container to one carrying a far more lethal payload.

Organizations of all sizes have been shifting security responsibility further left as part of an effort to enlist developers in preventing such compromises across all their applications. The challenge is that securing containers requires a lifecycle management approach that is both cumbersome to implement and challenging to maintain.

In theory, organizations are arming developers with tools to scan their code for vulnerabilities. However, that’s only the start of the process. DevOps teams are then expected to scan container runtimes and the hosts they run on for vulnerabilities as well. While that may not seem much different from how monolithic applications are secured, the rate at which the containers that make up a microservices-based application are updated is usually several orders of magnitude greater than the rate at which monolithic applications are updated. Each of those containers and the platforms on which they run needs to be scanned every time there is an update, as sketched below.
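
To make that concrete, here is a minimal sketch of a build-time scan gate: the pipeline scans the image that was just built and fails the build if high-severity findings surface. It assumes the open source Trivy scanner is installed on the build host; the image name and severity threshold are illustrative placeholders, not anything prescribed by the teams quoted here.

```python
"""Minimal sketch of a build-time image scan gate (assumes Trivy is installed)."""
import json
import subprocess
import sys

IMAGE = "registry.example.com/payments/api:1.4.2"  # hypothetical image name


def scan_image(image: str) -> dict:
    # Ask Trivy for machine-readable results so the pipeline can decide
    # whether to promote the image or fail the build.
    result = subprocess.run(
        ["trivy", "image", "--format", "json", "--severity", "HIGH,CRITICAL", image],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)


def has_blocking_findings(report: dict) -> bool:
    # Trivy groups findings per scanned target; any non-empty list blocks the build.
    return any(target.get("Vulnerabilities") for target in report.get("Results", []))


if __name__ == "__main__":
    report = scan_image(IMAGE)
    if has_blocking_findings(report):
        print(f"Blocking vulnerabilities found in {IMAGE}; failing the build.")
        sys.exit(1)
    print(f"{IMAGE} passed the scan gate.")
```

The same check would typically run as a step in whatever CI system builds the image, so a failed scan stops the release rather than merely logging a warning.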

More challenging still, new vulnerabilities might be discovered in existing container applications at any moment, which means DevOps teams may find themselves rescanning long-running containers and platforms that were once thought to be secure.
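
One way to approach that rescanning, sketched below under the assumption that the official Kubernetes Python client and the Trivy CLI are available and that kubeconfig grants read access to pods, is to periodically enumerate the images already running in a cluster and scan each one again; the schedule, severity threshold and reporting are all illustrative choices.

```python
"""Minimal sketch of rescanning images that are already running in a cluster."""
import subprocess

from kubernetes import client, config  # pip install kubernetes


def running_images() -> set:
    # Collect the unique set of images behind every pod in the cluster.
    config.load_kube_config()
    pods = client.CoreV1Api().list_pod_for_all_namespaces(watch=False)
    return {c.image for pod in pods.items for c in pod.spec.containers}


def rescan(image: str) -> int:
    # --exit-code 1 makes Trivy return non-zero when HIGH/CRITICAL CVEs are found,
    # including ones published after the image was first scanned.
    completed = subprocess.run(
        ["trivy", "image", "--exit-code", "1", "--severity", "HIGH,CRITICAL", image]
    )
    return completed.returncode


if __name__ == "__main__":
    flagged = [img for img in sorted(running_images()) if rescan(img) != 0]
    for img in flagged:
        print(f"New high-severity findings in long-running image: {img}")
```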

Brandon Lum, an IBM researcher and technical lead for the security interest group for the Cloud Native Computing Foundation (CNCF), says a different security mindset is required. Because containers are constantly moving, a more agile process is needed, which in turn requires a lot of testing infrastructure to be in place.

Tensions between developers and security teams are nothing new, but containers are clearly exacerbating the issue. A new container security whitepaper just published by the CNCF SIG even goes so far as to note “it’s unreasonable to expect developers and operations to become security experts.” It’s little wonder so many organizations are constantly weighing the costs of implementing container security against the actual risk. In fact, the CNCF whitepaper notes, “With the rapid onset of modern methodologies and better alignment of IT activity with business needs, security must be adaptive, commensurately applied and transparent.”

“Commensurately applied” is, of course, determined by the level of risk an organization is willing to assume. In fact, it’s not unheard of for some organizations to assume that a container with potential vulnerabilities will be ripped and replaced long before there is a real chance it might be compromised. The assumption is that the rate at which applications can be deployed and updated to advance a business goal trumps container security concerns. If that weren’t the case, there wouldn’t already be millions of containers deployed in production environments despite known vulnerabilities.

Security professionals, of course, are trying to minimize the number of potential container vulnerabilities that might exist in a production environment through a combination of policy and fiat. It’s possible to attach policies that prevent a container from running if it doesn’t meet specific requirements. The issue is that, when push comes to shove, most organizations will decide business necessity outweighs security concerns.
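
As a rough illustration of such a policy gate, the sketch below implements a tiny Kubernetes validating admission webhook that refuses to admit any pod whose images do not come from an approved registry. The registry prefix, port and plain-HTTP server are illustrative assumptions only; in practice this kind of rule is usually expressed in a policy engine such as OPA/Gatekeeper or Kyverno and served over TLS.

```python
"""Minimal sketch of an admission webhook that blocks non-compliant pods."""
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

TRUSTED_PREFIX = "registry.example.com/"  # hypothetical approved registry


class AdmissionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The API server sends an AdmissionReview describing the pod to be created.
        review = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        pod = review["request"]["object"]
        images = [c["image"] for c in pod["spec"]["containers"]]
        allowed = all(img.startswith(TRUSTED_PREFIX) for img in images)

        response = {"uid": review["request"]["uid"], "allowed": allowed}
        if not allowed:
            response["status"] = {"message": "image not pulled from an approved registry"}

        body = json.dumps({
            "apiVersion": "admission.k8s.io/v1",
            "kind": "AdmissionReview",
            "response": response,
        }).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # The API server would call this endpoint (over TLS in a real deployment)
    # before allowing any pod to be scheduled.
    HTTPServer(("0.0.0.0", 8443), AdmissionHandler).serve_forever()
```

The control point is what matters: if the policy check fails, the pod never starts, regardless of how far the image made it through the pipeline.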

The one exception to that rule is government agencies such as the U.S. National Security Agency (NSA). According to Emily Fox, DevSecOps lead for the NSA, developers stuck in the ways of yesteryear who don’t adhere to DevSecOps best practices will not be allowed to work for most organizations.

Container security lifecycle management may one day soon be more attainable for those organizations with the help of artificial intelligence (AI) and automation, but as the state of DevSecOps stands now in most organizations, it appears any prediction this side of 2025 is a tad optimistic.

In the meantime, the use of encrypted containers, along with other new tools that promise to make containers more secure, should increase. The challenge is figuring out a practical way to apply them.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.