August 22, 2017

We are all familiar by now with the promised benefits of deploying applications with Docker containers. These include:

  • Significant scalability and agility improvements,
  • Predictable performance and resource isolation, and
  • Application portability and rapid deployment.

However, as more companies deploy business-critical applications with containers, it’s important not to erode these benefits with complex management and container security requirements.

Containers are being used extensively in test environments and to improve the QA process for application deployment. But production deployments of containers for business-critical services are in their infancy. As more companies migrate existing applications to containers or deploy new customer-facing application containers, there must be a greater understanding of how to manage and secure running containers.

One thing to avoid is burdening the production deployment of containers with complex configuration and people processes. Containers bring a new world of rapid development, deployment and scaling of production to meet runtime demands. It would be a shame to lose many of those benefits by forcing delays due to process reviews and security updates.

To match the speed and agility of a continuous integration/continuous delivery (CI/CD) process with the required management and security controls, traditional tools must be adapted to work seamlessly in a containerized environment.

Here’s a list of just some of the things that will need to be adapted in this rapid-pace environment:

  • Scanning the images in registries for vulnerabilities
  • Controlling access to registries
  • Controlling network access to containers by hosts, applications, or other groups
  • Configuring the host operating system to properly lock it down
  • Creating security rules for network and application layer communication
  • Updating security rules as new containers spawn or are torn down, across hosts or data centers
  • Scanning newly running containers for new vulnerabilities
  • Detecting real-time threats to containers in production
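
The first item on this list, image scanning, can be reduced to a simple idea: compare what is installed in an image against a feed of known advisories. A minimal sketch follows; the package names, versions, and advisory data are hypothetical illustrations, and a real scanner would pull its feed from a vulnerability database rather than a hard-coded dict.

```python
# Hypothetical advisory feed: package -> versions known to be vulnerable
KNOWN_VULNERABLE = {
    "openssl": {"1.0.1f", "1.0.1g"},  # e.g. Heartbleed-era builds
    "bash": {"4.3"},                  # e.g. Shellshock-era builds
}

def scan_image(packages):
    """Return (package, version) pairs that match a known advisory."""
    findings = []
    for name, version in packages.items():
        if version in KNOWN_VULNERABLE.get(name, set()):
            findings.append((name, version))
    return findings

# Example package manifest extracted from a (hypothetical) registry image
image_packages = {"openssl": "1.0.1f", "curl": "7.52.1", "bash": "4.4"}
print(scan_image(image_packages))  # -> [('openssl', '1.0.1f')]
```

The same check has to run not just on registry images but also on newly started containers, since advisories are published after images are built.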

Traditional cloud security has been focused on protecting the data center boundary, network, virtual machines, applications and user data. Security solutions are deployed as a gate at the entry to a data center, or inside it, to achieve these protections. But the modern private, public or hybrid cloud is a dynamic environment that changes with service and application needs.

Cloud management and automation tools are making these changes easier. That said, most security solutions today must still be manually configured when changes happen inside the data center. For example, when new database server VMs are created, operations engineers need to modify their automation scripts or configure the management system to link them into the existing workflow.

But what about security? Similar changes need to happen there, too. For example:

  • Security policies need to be adjusted, then applied to the new servers
  • The network needs to be configured or adjusted to allow and accept these changes
  • Tests are often done in a staging environment
  • When the same servers are moved into a production environment, reconfiguration or even retests are needed to ensure they pass all security checks
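
The first two adjustments above can, in principle, be automated: expand a per-role policy template into concrete rules whenever a new server joins the inventory. The sketch below assumes a hypothetical policy format and addresses; real deployments would drive this from an orchestration inventory or CMDB.

```python
# Hypothetical tiered policy: which roles may reach which, and on what port
POLICY = {
    "db": {"allow_from": ["app"], "port": 5432},   # only the app tier may reach the DB
    "app": {"allow_from": ["web"], "port": 8080},
}

def rules_for(role, host_ip, inventory):
    """Expand the role's policy into concrete allow rules for one new host."""
    spec = POLICY[role]
    rules = []
    for src_role in spec["allow_from"]:
        for src_ip in inventory.get(src_role, []):
            rules.append(f"allow {src_ip} -> {host_ip}:{spec['port']}")
    return rules

# A new database server VM comes up; regenerate its allow rules
inventory = {"app": ["10.0.1.5", "10.0.1.6"]}
print(rules_for("db", "10.0.2.9", inventory))
# -> ['allow 10.0.1.5 -> 10.0.2.9:5432', 'allow 10.0.1.6 -> 10.0.2.9:5432']
```

The point of the sketch is that the policy stays declarative while the concrete rules are regenerated on each change, which is what keeps the process from becoming a manual bottleneck.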

All of these operations could slow down deployment dramatically. So should you ignore these changes and put your head in the sand to avoid them?

New Environment, Same Security Issues

In the new container-based environment, which brings the pace of management and scalability to a new level, most of the critical security issues that existed with VMs still exist. As Gartner noted in a research note this year, containers used as a wrapper around applications can help improve security through encapsulation and isolation.

But containerization also brings new security challenges. For example, internal application communication will now be exposed to the data center network because of the microservice architecture, especially if containers span hosts or data centers. What were originally in-memory messages now become RESTful network payloads, and in the process, critical information can be exposed to the outside world.
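
To make the shift concrete, here is a minimal sketch contrasting an in-process call with the same data serialized as a REST-style payload. The order fields are invented for illustration; the point is that once serialized onto the network, fields like the card digits are visible to anyone who can observe the traffic unless the link is encrypted and access-controlled.

```python
import json

def charge(order):
    # In-memory call: the order data never leaves the process boundary
    return {"status": "charged", "order_id": order["order_id"]}

order = {"order_id": 42, "card_last4": "4242", "amount": 19.99}

# Microservice version: the same structure becomes a network payload
payload = json.dumps(order)  # this string now travels over the data center network
print(payload)
print(charge(json.loads(payload)))
```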

In addition, lightweight containers can be launched and stopped very quickly, which makes the cloud environment much more dynamic than before. A container’s life cycle is much shorter than a virtual machine’s, so fitting into this hyperdynamic environment is a new challenge for security solutions.
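
One way a security layer can keep up with that churn is to react to container lifecycle events rather than being reconfigured by hand. The sketch below models events as plain dicts; a real system would consume them from the orchestrator or the Docker event stream, and the IPs shown are hypothetical.

```python
# Allow-list of container IPs, kept in sync with lifecycle events
allowed_ips = set()

def on_event(event):
    """Update the allow-list from a container start/stop event."""
    if event["action"] == "start":
        allowed_ips.add(event["ip"])
    elif event["action"] == "stop":
        allowed_ips.discard(event["ip"])

for ev in [
    {"action": "start", "ip": "172.17.0.2"},
    {"action": "start", "ip": "172.17.0.3"},
    {"action": "stop", "ip": "172.17.0.2"},  # container torn down seconds later
]:
    on_event(ev)

print(sorted(allowed_ips))  # -> ['172.17.0.3']
```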

Just as with virtual machines, and dedicated servers before them, traditional security controls will need to be adapted and applied to containers. These include scanning for vulnerabilities, detecting anomalies or violations of allowed behavior, and protecting against real-time threats to applications or hosts. But all of this needs to happen in an environment where new images are being released constantly and containers are scaling up and down to meet demand.

One thing is certain: processes and tools will adapt to these new requirements. The rapid and continual change in these environments means that management, monitoring and security products will need to learn service behavior and adapt to changes automatically, because the business simply does not have time to wait for manual configuration and testing of updated policies and rules.
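
"Learning service behavior" can be sketched at its simplest as a baseline of observed connections followed by an enforcement phase. The connection tuples and the clean split into two phases are assumptions for illustration; real products learn continuously and model far more than connection endpoints.

```python
# Baseline of observed (source, dest, port) connections
baseline = set()

def learn(conn):
    baseline.add(conn)

def is_anomaly(conn):
    return conn not in baseline

# Learning phase: observe normal traffic for a while
for conn in [("web", "app", 8080), ("app", "db", 5432)]:
    learn(conn)

# Enforcement phase: anything outside the learned baseline is suspect
print(is_anomaly(("app", "db", 5432)))  # -> False (normal)
print(is_anomaly(("web", "db", 5432)))  # -> True  (web should not reach the DB)
```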

About the Author / Fei Huang

Fei Huang is the CEO and co-founder of NeuVector Inc. NeuVector delivers a container-native solution for securing running Docker containers. Fei has over 15 years of experience in enterprise security, virtualization, cloud and infrastructure software. He has held engineering management positions at VMware, Cloud Volumes and Trend Micro, and was a co-founder of the DLP security company Provilla.