Last week Puppet Labs released a new version of Puppet Enterprise that brings a number of new features, including Puppet Node Manager, which helps automate the provisioning of infrastructure from containers to bare metal, and a new AWS module that helps automate the provisioning, configuration and management of AWS resources using Puppet. As part of the release, Puppet announced it is officially supporting a Puppet module for Docker that has been kicking around Puppet Forge since the containerization tool went open source. With 90,000 downloads behind it and many more customers clamoring for advice on how to better automate workflows in Puppet-controlled infrastructure while using Docker, the module had gained critical mass. DevOps.com took the announcement as an opportunity to catch up with Gareth Rushgrove, senior software engineer with Puppet Labs and author of the Docker module, to discuss containerization and how Puppet Labs is working to help customers take better advantage of container environments.
What do you consider some of the biggest challenges when deploying containers in the real world? And how is Puppet Labs hoping to help customers get the most out of Docker?
I think the first thing is really just getting started. Deciding where to start and deciding what problem you’re going to solve. One of the nice things about Docker and containers generally is that there’s lots of different problems they could potentially solve. But one of the things that’s consistent across all of those is that if you’re going to be using Docker, you need to install Docker.
One of the things we’ve done well with the Docker module is that if you already have an infrastructure that’s running Puppet and you’re thinking ‘Well, maybe I want to try out this Docker thing,’ you can pretty much write ‘include docker’ in your existing Puppet manifest, and you now have Docker on your host: installed, managed and with the service running. You can update Docker and you can configure it very simply. So, irrespective of what you’re going to do with Docker, you’ve got a baseline to start from. What we’re doing with the 3.8 release is we’re actually offering support to customers at Puppet Labs in using that module, both to install and manage Docker, but also to use some higher-level features as well.
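As a rough sketch of what that baseline looks like, assuming the garethr/docker module from Puppet Forge has been installed on the Puppet master (the version number below is purely illustrative):

```puppet
# Minimal sketch: installs Docker, manages the package and keeps
# the service running, all with the module's defaults.
include docker

# Alternatively, a resource-like class declaration lets you pin
# configuration -- for example a specific Docker version. Use this
# instead of 'include docker', not alongside it:
# class { 'docker':
#   version => '1.6.0',
# }
```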
So as well as installing, managing and configuring Docker, we can also pull Docker images from the Docker Hub. We can launch individual containers and run commands within a container context. It’s basically exposing a lot of the Docker command-line tool to Puppet. One of the example use cases we published recently was using the module to install Docker Swarm.
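Those higher-level pieces map onto defined types in the same module. A hedged sketch, with the image name and command chosen purely for illustration:

```puppet
# Pull an image from the Docker Hub:
docker::image { 'ubuntu':
  image_tag => 'trusty',
}

# Launch a container from that image; the module keeps it
# running as a managed service on the host:
docker::run { 'helloworld':
  image   => 'ubuntu:trusty',
  command => '/bin/sh -c "while true; do echo hello world; sleep 1; done"',
}
```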
How would you say Docker and containerization in general are helping enterprises transform the way they deliver IT services?
I think one of the gateway use cases is speeding up and scaling test infrastructure. Being able to isolate individual test runs within containers means you can run them much faster than you could by booting up a VM every time you’re doing a small test run.
I think that’s a key entry point. I think you can do that without changing anything about your application.
I think what happens then is people learn the tools, like what they have there, and start adopting some of the more developer-workflow friendly aspects of Docker.
And I think it’s from there that people then start looking at how to use it if the application is running in production or running services. And I think from there people will experiment. They’ll often look at maybe even running one Docker container per host, literally just using Docker as a packaging format. And I think the stepping stone from there is then to the more cluster-aware orchestration tools like Kubernetes.
I think today the ability to use it for testing infrastructure is there and it’s very simple. But the workflow stuff is, undoubtedly, what customers talk about most, especially in environments where you have lots of different applications and languages.
What do you think is next for Docker and this ecosystem in the next year or so?
I think the big thing is going to be the maturation of Docker and the tools for actually running Docker in production. We’ll answer questions like, how do we monitor containers? Or how do we schedule and manage containers? Or how do we solve configuration management problems like knowing that a running container is based on an image that contained an out-of-date version of something? We don’t yet have really good tools in the Docker space to deal with those problems like we do with Puppet outside of the containers. There are ways of using tools like Puppet to do that today, but I think what we’ll hopefully see is more container-native ways of doing that. That will happen alongside the maturation of the scheduling tools.
I think one good thing is that Kubernetes is moving incredibly quickly. Its API feels really solid and it is based on a lot of experience from running containers at Google. The nice thing there is that the API gives us something to potentially hook onto. We’ve increasingly done work around Puppet to not just manage host-level resources, but to manage things at a distance with APIs. The Amazon Web Services work we’re releasing is a good example of that. We’ve also done work around network devices: they expose an API and we can provide a way to declaratively manage them. And I think there’s lots of potential with the high-level orchestration APIs to do the same.
I think the flip side of that is the Docker platform itself; a year from now, it will be clearer what that’s going to be. Swarm, Machine and Compose are in beta today, but in a year they will be much more a part of the ecosystem, and they’ll be driving a lot of the conversations around those APIs.
An award-winning freelance writer, Ericka Chickowski covers information technology and business innovation. Her perspectives on business and technology have appeared in dozens of trade and consumer magazines, including Entrepreneur, Consumers Digest, Channel Insider, CIO Insight, Dark Reading and InformationWeek. She’s made it her specialty to explain in plain English how technology trends affect real people.