CoreOS CEO Alex Polvi says the “self-driving” implementation of Kubernetes, dubbed CoreOS Tectonic, will make use of the same automated updating processes that CoreOS employs in Container Linux, its Linux distribution. The goal, he says, is to make it easier for IT operations teams to ensure they are running the latest and most secure instances of Kubernetes and Linux.
Of course, Polvi notes, IT organizations will not be required to turn on automatic updates, because many of them prefer to manage updates at their own pace. But he says those that do opt for automatic updates will be better able to cope with the rapid rate of innovation that characterizes DevOps environments making extensive use of containers.
IT organizations also can roll back any of those updates at the push of a button, he says, allowing them to respond quickly whenever an update causes an application to break.
Other new CoreOS Tectonic capabilities include installers that make it simpler to deploy Kubernetes, a console to visually inspect that deployment, and hooks into a variety of third-party security frameworks.
CoreOS Tectonic is now available at no cost on up to 10 nodes.
In general, Polvi says CoreOS is moving to “eliminate the fire drills” that IT organizations experience every time there is an update to a core piece of infrastructure. IT organizations clearly want to reduce the total cost of operating an IT environment: the less time IT staff spend manually updating infrastructure, the more time they have to add value to the business elsewhere. In fact, the many IT specialists needed to manage IT environments today soon might give way to fewer IT generalists relying on IT automation frameworks to manage those environments at scale. In addition, Polvi notes that version numbers applied to software might one day become obsolete, because new features will be slipstreamed into software as part of a continuous series of updates.
But as promising as that sounds, the typical IT environment is a mass of application interdependencies, and applications break when updates to one part of the environment are not reflected in another. After successive waves of IT administrators have come and gone, many IT organizations are no longer even certain where all those interdependencies exist. The result is often a lot of trial and error that can disrupt a business dependent on those applications always being available.
The degree to which any IT organization is advanced enough in its DevOps processes to make that shift will naturally vary. But one thing is certain: every IT organization would prefer to spend far more time building and deploying applications than tending to the IT infrastructure on which those applications depend.