CloudPassage Promises to Secure Container Environments

The primary challenge with container security is that IT organizations need to address it at the image, container and host levels. CloudPassage has announced plans to do just that with Project Azul, a single integrated platform.

Amit Gupta, vice president of product management for CloudPassage, says IT organizations going forward will need a platform that integrates container security at multiple levels to ensure their container environment remains sufficiently hardened.

Project Azul builds on CloudPassage Halo, the security platform the company created to protect workloads running on virtual machines (VMs) and hosts deployed in private or public clouds. Previously, CloudPassage extended Halo to enable IT organizations to define configuration policies, assess vulnerabilities, monitor changes made to those configurations, detect intrusions and deploy microsegmented firewalls to secure both the core Docker Engine and individual Docker containers. Currently in beta with a release scheduled for this winter, Project Azul will extend those capabilities across images, containers and the hosts they run on.
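To make the idea of configuration policies concrete, here is a minimal sketch of the kind of check such a platform might run against a container's settings. The rule names and the configuration format are invented for illustration; this is not CloudPassage's API.

```python
# Hypothetical configuration-policy checker. The policy rules below
# (no privileged mode, no root user, no privileged host ports) are
# common hardening checks, not CloudPassage's actual rule set.

def check_policy(config: dict) -> list:
    """Return a list of policy violations for one container's settings."""
    violations = []
    if config.get("privileged", False):
        violations.append("container must not run in privileged mode")
    if config.get("user", "root") == "root":
        violations.append("container must not run as root")
    for port in config.get("published_ports", []):
        if port < 1024:
            violations.append("privileged host port %d exposed" % port)
    return violations
```

A compliant container (non-root user, unprivileged, high ports only) yields an empty list; a privileged container running as root on port 80 would trip all three rules.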

Gupta says the ephemeral nature of containers represents a major challenge: most IT organizations don’t have tools capable of securing IT environments where workloads are continuously added and updated. Most IT organizations require security orchestration tools that can be employed within the context of a larger continuous integration/continuous deployment (CI/CD) environment, he says, adding that containers running on both bare-metal servers and virtual machines will be at the core of any DevSecOps initiative an organization is likely to implement.
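The CI/CD gate Gupta describes can be sketched as a simple policy step that blocks a deployment when an image scan reports severe findings. The findings format and the severity threshold here are assumptions for illustration, not the output of any particular scanner.

```python
# Sketch of a CI/CD security gate: fail the pipeline when an image
# scan surfaces high-severity vulnerabilities. The findings structure
# is hypothetical, not a real scanning tool's output format.

def should_block_deploy(findings: list, max_high: int = 0) -> bool:
    """Return True if the build should fail based on scan findings."""
    high = sum(1 for f in findings
               if f.get("severity") in ("high", "critical"))
    return high > max_high

# Example scan output for one image build:
findings = [
    {"cve": "CVE-2017-0001", "severity": "critical"},
    {"cve": "CVE-2017-0002", "severity": "low"},
]
```

In a pipeline, a step like this would run after the image is built and before it is pushed to a registry, so insecure images never reach production.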

Because IT organizations may not know for certain where any given container or image will be deployed, CloudPassage is betting there will be demand for an approach that applies a common security framework to both containers and virtual machines. Many containers today are deployed on virtual machines or in platform-as-a-service (PaaS) environments simply because IT organizations lack the tools to secure containers running on bare-metal servers, even though bare metal would enable much higher utilization of infrastructure resources.

As is often the case with emerging technologies, many developers are not waiting for security tools to catch up before embracing containers. That growing adoption of containers by developers puts considerable pressure on IT security professionals, who have limited to no visibility into those containers.

There’s no doubt at this juncture that some détente needs to be established in the months ahead between IT security professionals and application developers. Far too many IT security professionals are not even aware of what a container is, much less where containers might be deployed inside a local IT environment or in a public cloud. In the absence of any formal training, most IT security professionals are naturally inclined to turn off anything they can’t monitor. That, in turn, creates a recipe for conflict that would be unnecessary if everyone involved had a rational conversation about container security.

Mike Vizard

Mike Vizard is a seasoned IT journalist with more than 25 years of experience. He has also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was editorial director of Ziff-Davis Enterprise and editor-in-chief of CRN and InfoWorld.
