Rackspace, in collaboration with Hewlett-Packard Enterprise (HPE), has unveiled what it describes as the first managed Kubernetes service available under a pay-as-you-go pricing model.
Announced at the HPE Discover 2018 conference, the Kubernetes-as-a-Service offering from Rackspace is hosted on HPE infrastructure running OpenStack cloud management software. Scott Crenshaw, executive vice president for private clouds at Rackspace, says the service will in time also be extended to the other platforms Rackspace supports, including VMware, public clouds and bare-metal servers.
Earlier this week, HPE announced its GreenLake Flex Capacity pricing model, which the Rackspace Kubernetes service leverages to deliver pay-as-you-go pricing. Rackspace announced a similar offering for VMware customers.
Crenshaw says pay-as-you-go pricing models will soon become the new normal, thanks largely to the rise of public cloud services. But IT organizations don’t want to be locked into a single cloud service provider, and Kubernetes gives them the opportunity to deploy workloads wherever they see fit. In fact, Crenshaw contends, IT teams will take back decision-making from developers over where workloads should be deployed, based on economic and corporate governance requirements. He adds that, despite the hype surrounding public clouds, it’s on average 40 percent less expensive to deploy application workloads on-premises.
Interest in managed services that can handle workloads regardless of where they are deployed is rising because organizations want to run more workloads without increasing the total cost of computing, says Crenshaw. That often translates into greater reliance on external service providers to manage IT infrastructure, which frees the organization to devote more resources to building and continuously updating applications, he says.
Rackspace has been making a concerted effort to incorporate its managed services into the DevOps processes that more IT organizations are starting to embrace. To date, most Docker container use in the enterprise has focused on lifting and shifting applications into the cloud in a way that doesn’t require refactoring them. But as organizations begin to develop cloud-native applications using containers and Kubernetes, the complexity of the microservices architectures underpinning those applications will push more of them to rely on external providers, largely because of a shortage of in-house Kubernetes operational expertise.
Kubernetes clearly has the potential to transform IT operations by unifying compute, storage and networking within a programmable cluster. But for many IT organizations, that unification creates as many organizational and cultural challenges as technical ones. In many cases, the better part of valor might be to outsource the management of Kubernetes clusters altogether. Crenshaw notes that, given the portability of Kubernetes, it’s now much easier to bring IT infrastructure to the right place at the right time, assuming there is enough expertise available to manage all those distributed instances of Kubernetes. Rackspace is clearly betting that such expertise will be in critically short supply.