CoreOS Launches “Stackanetes” Initiative at OpenStack Summit

The promised integration of the Kubernetes container orchestration framework and OpenStack got underway this week at the OpenStack Summit Austin 2016 conference with the launch of a “Stackanetes” initiative by CoreOS.

The initiative arrives 30 days after CoreOS, Intel and Mirantis announced that they are collaborating on porting the OpenStack cloud management framework to Kubernetes. CoreOS CEO Alex Polvi conducted a live demo of Stackanetes at the OpenStack Summit event.

The goal is to create a self-healing instance of OpenStack that relies on Kubernetes to automatically spin OpenStack components back up within minutes any time they fail.
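As a rough illustration of that pattern, the sketch below uses the official Kubernetes Python client to run a hypothetical OpenStack Keystone container as a Deployment with a liveness probe, so Kubernetes replaces the process automatically when it fails. This is not the Stackanetes code itself; the image name, port, replica count and probe settings are assumptions for illustration only.

```python
# Minimal sketch: a self-healing OpenStack component on Kubernetes,
# expressed with the official Kubernetes Python client. All names and
# values below are hypothetical, not the actual Stackanetes manifests.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config

keystone = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="keystone", namespace="openstack"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # keep two copies of the identity service running
        selector=client.V1LabelSelector(match_labels={"app": "keystone"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "keystone"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="keystone",
                        image="example.org/openstack/keystone:latest",  # hypothetical image
                        ports=[client.V1ContainerPort(container_port=5000)],
                        # The liveness probe lets Kubernetes detect a failed
                        # Keystone process and restart it automatically.
                        liveness_probe=client.V1Probe(
                            http_get=client.V1HTTPGetAction(path="/v3", port=5000),
                            initial_delay_seconds=30,
                            period_seconds=10,
                        ),
                    )
                ]
            ),
        ),
    ),
)

# Submitting the Deployment hands responsibility for keeping Keystone
# alive over to the Kubernetes control plane.
client.AppsV1Api().create_namespaced_deployment(namespace="openstack", body=keystone)
```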

Running on Tectonic, the distribution of Kubernetes that CoreOS makes available, Stackanetes is a prototype of what is intended to become a standard OpenStack capability in time. Once that work is completed, Polvi says, IT organizations will be able to more easily replicate within their own environments the approach Google uses to run its IT operations. Known as Google’s Infrastructure for Everyone Else (GIFEE), the idea, Polvi says, is to move all the hypervisor, networking and management-plane functions that make up the data center into pods of containers that can be more easily managed.

Joining Polvi on stage at OpenStack Summit was Craig McLuckie, Cloud Native Computing Foundation Chair and Google Group Product Manager, who says Google is committed to integrating Kubernetes with a number of adjacent technology platforms, or “ports of call.” OpenStack, as it happens, is the first of those ports of call, says McLuckie.

When all these Kubernetes capabilities will actually be available in production OpenStack environments is still largely an unknown. Kamesh Pemmaraju, vice president of product marketing for Mirantis, says it may take as long as another nine to 12 months. The most significant thing about Kubernetes support is that it allows OpenStack to function as an integration engine across virtual machines, bare-metal servers and containers. While containers may be all the rage these days, Pemmaraju notes that application workloads that depend on virtual machines are not going away any time soon.

In the meantime, Pemmaraju says, OpenStack will become more robust and scale even higher once the OpenStack management plane itself runs in containers. Right now there are a lot of OpenStack management dependencies that wind up being hard-coded in a way that makes ensuring, for example, high availability a major challenge, says Pemmaraju.

Less clear is to what degree existing workloads running on platforms such as VMware will migrate to OpenStack. Although OpenStack was initially positioned as a management platform for cloud-native applications, there is clearly a growing movement toward moving both new and legacy applications onto it. While there is still much work to be done to make OpenStack as robust a platform as VMware, in time many IT organizations would like to be able to rationalize their platforms around an open source platform that doesn’t require them to pay commercial licensing fees.

Of course, there are other approaches to managing containers that could achieve that goal. But in terms of the size of its community, Kubernetes has a level of momentum that other open source projects can’t match at the moment. At the other end of the spectrum, many of those rival container orchestration frameworks are both faster and easier for the average IT organization to master. As such, the battle for container management supremacy in the enterprise is still in its infancy.

Mike Vizard

Mike Vizard is a seasoned IT journalist with over 25 years of experience. He has also contributed to IT Business Edge, Channel Insider, Baseline and a variety of other IT titles. Previously, Vizard was the editorial director for Ziff-Davis Enterprise as well as Editor-in-Chief for CRN and InfoWorld.
