Is This the Beginning of the End for OpenStack?

I was struck by a conversation I had earlier this year during the OpenStack conference in Austin with a technical architect from one of the bigger players. He was seeing baffled IT teams who had OpenStack clouds in which the users (developers) were not spinning virtual machines (VMs) up and down as expected. They were just deploying a bunch of VMs and then leaving them running for long periods. When the IT folks investigated, they found the VMs were Docker host VMs and the developers were now deploying everything as containers. There was a lot of dynamic app deployment going on, just not at the VM level.

Then recently Mirantis announced that it would be porting the OpenStack IaaS platform to run as containers, scheduled (orchestrated) by Kubernetes, and making it easy to install Kubernetes using the Fuel provisioning tool. This is a big shift in focus for the OpenStack consulting firm, as it aims to become a top Kubernetes committer.
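
To make that shift concrete, here is a minimal sketch, using the standard Kubernetes Python client, of how an operator might ask Kubernetes which containerized OpenStack control-plane services it is scheduling. This is not Mirantis’s actual tooling, and the “openstack” namespace is an assumption for illustration only:

    # Minimal sketch, not Mirantis tooling: assumes a kubeconfig pointing at a
    # cluster where OpenStack control-plane services run as pods in a
    # hypothetical "openstack" namespace.
    from kubernetes import client, config

    config.load_kube_config()  # read credentials from ~/.kube/config
    core = client.CoreV1Api()

    # List the containerized OpenStack services Kubernetes is scheduling.
    for pod in core.list_namespaced_pod(namespace="openstack").items:
        print(pod.metadata.name, pod.status.phase)

In that world, Kubernetes is the layer keeping the OpenStack services themselves alive, rather than the other way around.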

OpenStack, containers and Kubernetes all exist for a singular purpose: to make it faster and easier to build and deploy cloud-native software. It’s vital, then, to pay attention to the needs of the people who build and deploy software inside enterprises, because the enterprise is OpenStack’s sweet spot today.

Questioning OpenStack’s Relevancy

If I put myself in a developer’s shoes, I am not sure why I care about spinning up VMs and, hence, OpenStack. Docker containers came along and made packaging and deploying microservices much easier than deploying into VMs. And there’s now a strong ecosystem around container technology to fill the gaps, extend its capabilities and make the whole thing deployable in production. The result has been phenomenal growth in container usage in a very short amount of time. The remaining operational problem for the average enterprise is deployment of its container stack of choice onto bare metal or its existing hypervisor, which it can do today with tools it already has, such as Puppet/Chef/Salt or, in the future, using Fuel.
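
To see why, compare the two workflows from a developer’s seat. The sketch below is illustrative only, using the Docker SDK for Python and openstacksdk; the image name, cloud name, flavor and network identifiers are all placeholders:

    # Illustrative contrast of the developer-facing API calls; the image name,
    # cloud name, flavor and network IDs are placeholders, not real values.
    import docker
    import openstack

    # Containers: the packaged app is the deployable unit. One call and the
    # service is running wherever a Docker engine exists.
    docker_client = docker.from_env()
    docker_client.containers.run("myapp:latest", detach=True)

    # VMs: first ask the IaaS layer (OpenStack Nova) for infrastructure, then
    # install and configure the app on top of it as a separate step.
    conn = openstack.connect(cloud="mycloud")
    conn.compute.create_server(
        name="myapp-vm",
        image_id="ubuntu-16.04",        # placeholder image ID
        flavor_id="m1.small",           # placeholder flavor ID
        networks=[{"uuid": "net-id"}],  # placeholder network UUID
    )

Both snippets are only a handful of lines, but the container call ends with a running application, while the VM call ends with empty infrastructure that still needs provisioning before the app can run.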

Of course, this focuses on the developers working on new stuff or refactoring apps. Container penetration is small relative to the mass of existing systems, as lots of things are not in containers today and will be happily uncontained for years to come. So, there’s obviously still a need for VMs. Is that why OpenStack still matters?

Problem one is that OpenStack was initially a platform to arm service providers to compete with AWS; when that didn’t pan out, it refocused on being the infrastructure as a service (IaaS) for new apps. There was a time when it was hard to read an article about OpenStack without hearing about “pets vs. cattle,” and OpenStack was designed to herd cattle. That was the reason to deploy it, even if you already had vSphere or Hyper-V with automation, and it was tough to migrate existing virtualized apps to OpenStack without changes.

Problem two is that OpenStack itself is a large and complex collection of software to deploy. It has itself become a big, complex pet, which is why Mirantis and others can make a living providing services, software and training. So an OpenStack deployment looks like a non-trivial cost and time investment—not to enable the exciting cloud-native new stuff, but the stuff that is already running just fine elsewhere in the data center. That’s a tough sell.

That’s why I question the future of OpenStack.

This is not to say that organizations with OpenStack somehow made a mistake: Giving their users on-demand cloud app environments is a good call. However, if they were making the same decisions today, those enterprises would need to think very hard about what their developers and DevOps teams would prefer: a dynamic container environment perhaps based on Fuel, Docker and Kubernetes—on-premises or in a public cloud—versus an on-prem private IaaS such as OpenStack.

Tough times ahead.

About the Author/Mathew Lodge

Mathew Lodge is Chief Operating Officer at Weaveworks Inc. and was previously vice president in VMware’s Cloud Services group. Mathew has 20+ years’ diverse experience in cloud computing and product leadership. He has built compilers and distributed systems for projects including the International Space Station, helped connect six countries to the internet for the first time, and managed a $630 million router product line at Cisco. Prior to VMware, Mathew was senior director at Symantec in its $1 billion-plus information management group.

3 thoughts on “Is This the Beginning of the End for OpenStack?”

  • In my opinion, big data and cloud are the future of technology, and OpenStack is said to be the biggest open-source platform. It’s funded and contributed to by IT giants like Google, Dell and many more. Google has its own cloud, but it still contributes to OpenStack.

    There is strong potential for enterprise-class capabilities with OpenStack, and new components (new features) are being developed rapidly. Flexibility is one of the biggest assets for companies using OpenStack, and many online OpenStack tutorials are available to learn more about it. There is no lock-in to service providers, and because it’s open source, companies with budget issues tend to switch to OpenStack. Finally, if cloud computing is the future, OpenStack will play a major role in it.

    • I think you miss the author’s point: you list the qualities of OpenStack, but you don’t compare OpenStack with containers. In the OpenStack (VM) vs. containers debate, containers have the advantage, in my opinion.
