Anchore Service Provides Means to Ensure Container Image Security

Most IT operations teams are particular about deploying any kind of image in a production environment, so it’s understandable that enthusiasm for deploying a container image that has not been vetted isn’t especially high.

To give IT operations teams more confidence in both the stability and security of container images, Anchore Inc. this week announced that it has begun beta testing a software-as-a-service (SaaS) environment that IT operations teams can use to certify, inspect, and synchronize container contents.

Scheduled to be available in the second quarter of this year, the service addresses a common pain point, says Anchore CEO Saïd Ziouani: many IT organizations that have begun to adopt containers initially stand up their own registries to keep track of container images, but over time many of them find this task to be more of a chore than they bargained for. Anchore will provide a service through which IT operations teams can maintain quality control over images without having to invest in setting up dedicated IT infrastructure to enable it, says Ziouani.

Designed to be compatible with public container registries such as Docker Hub as well as container platforms such as Kubernetes, Mesos, CoreOS, ECS, and Google Container Engine, Anchore is squarely focused on what is occurring inside the container itself rather than everything around it, says Ziouani. What sets Anchore apart is that it provides a mechanism for container quality control that doesn’t sacrifice the simplicity and agility that containers promise. Because Anchore is delivered as a SaaS environment, developers can easily load images into it from wherever they happen to be located, regardless of whether they work for the internal IT organization or a third-party development organization.
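The article doesn’t describe Anchore’s internal mechanics, but the basic idea behind this kind of image quality control can be sketched in a few lines: record a content digest for each image at vetting time, then refuse to deploy any image whose digest no longer matches. The image names and digests below are purely hypothetical, and this is a minimal illustration of the concept rather than Anchore’s actual implementation:

```python
import hashlib

# Hypothetical allowlist of vetted images, keyed by repository:tag.
# Each value is the SHA-256 content digest recorded when the image
# passed inspection; none of these names or payloads are real images.
APPROVED_DIGESTS = {
    "myorg/web:1.4": "sha256:" + hashlib.sha256(b"vetted-web-1.4").hexdigest(),
}

def image_digest(image_bytes: bytes) -> str:
    """Compute a sha256 content digest over an image archive."""
    return "sha256:" + hashlib.sha256(image_bytes).hexdigest()

def is_approved(image_ref: str, image_bytes: bytes) -> bool:
    """Allow deployment only if the image still matches its vetted digest."""
    expected = APPROVED_DIGESTS.get(image_ref)
    return expected is not None and image_digest(image_bytes) == expected
```

Because the digest covers the image contents rather than its tag, an image that was modified after vetting (or never vetted at all) fails the check even if it carries an approved name.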

Anchore was founded by Ziouani and Tim Gerla, both of whom were co-founders of Ansible, the open source IT management framework that was eventually acquired by Red Hat. The other founder of Anchore is company CTO Dan Nurmi, who was also a co-founder of Eucalyptus Systems, developer of a private cloud platform that was eventually acquired by Hewlett-Packard.

The security of container images, of course, is at the heart of a debate over where and how containers should be deployed. Many IT organizations worry that a container infected with malware could take over an entire machine. As a result, there is a strong preference in traditional enterprise IT organizations to deploy containers on top of a virtual machine to provide more isolation. But in cloud computing environments, containers are also seen as the key to increasing utilization rates, and the highest level of economic efficiency comes from deploying containers on a bare metal server. In that latter scenario, however, the operators of those clouds need to be especially vigilant about the security of the containers being deployed in a production environment.

Naturally, over time utilization rates will push more traditional enterprise IT organizations in the direction of bare metal servers. But none of that is going to happen until those organizations have complete confidence in the quality of the container images being deployed on those systems.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
