AWS Adds Lightweight Linux Instance to Container Services
Amazon Web Services (AWS) has added a lightweight instance of Linux it calls Bottlerocket to the container services it makes available on its public cloud.
Deepak Singh, vice president of compute services for AWS, says that beyond simply providing a way to host containers that consume less memory, Bottlerocket will make it easier for IT teams to recover from failures, because the time required to reconstruct the operating system using a container scheduler such as Kubernetes will be substantially shorter. Rather than relying on a traditional package update system, Bottlerocket employs an image-based model that allows for rapid and complete rollbacks of updates. That approach also makes it easier to apply updates across a fleet of Bottlerocket instances.
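The image-based model Singh describes is commonly implemented as an A/B partition scheme: the update is written to the inactive image slot and the boot flag is flipped, so rollback is just flipping the flag back. The toy model below illustrates that idea; it is a conceptual sketch, not Bottlerocket's actual implementation, and the `Host` class and its fields are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class Host:
    """A host with two image slots; exactly one is active at a time.

    Illustrative only -- Bottlerocket's real update machinery differs.
    """
    slot_a: str           # image version held in slot A
    slot_b: str           # image version held in slot B
    active: str = "a"     # which slot the host boots from

    def current_image(self) -> str:
        return self.slot_a if self.active == "a" else self.slot_b

    def apply_update(self, new_image: str) -> None:
        # Write the new image to the *inactive* slot, then flip the
        # boot flag. The previous image stays intact in the other slot.
        if self.active == "a":
            self.slot_b = new_image
            self.active = "b"
        else:
            self.slot_a = new_image
            self.active = "a"

    def rollback(self) -> None:
        # Rollback is just flipping the boot flag back; no package-level
        # surgery is needed, which is why it is fast and complete.
        self.active = "b" if self.active == "a" else "a"


host = Host(slot_a="v1.0", slot_b="v1.0")
host.apply_update("v1.1")
print(host.current_image())  # v1.1
host.rollback()
print(host.current_image())  # v1.0
```

Because the whole image is swapped as a unit, the same mechanism scales naturally to a fleet: every instance converges on an identical image rather than on a drifting set of individually updated packages.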
AWS created Bottlerocket because the cloud service provider was looking for a lighter-weight approach to running containers, one that would be easier to support and secure because the size and scope of the operating system environment shrinks, says Singh. Because there are fewer operating system modules, there is also less opportunity for configuration mistakes to be made.
Bottlerocket is based on a file system that is primarily read-only and integrity-checked at boot time via dm-verity. Accessing Bottlerocket via the SSH protocol is strongly discouraged. AWS makes SSH available only via a separate admin container that IT teams can enable as needed to troubleshoot specific issues.
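That admin container is off by default and can be switched on through Bottlerocket's TOML-formatted user data supplied at instance launch. The fragment below is a minimal sketch based on Bottlerocket's documented `host-containers` settings; verify the exact keys against the project's documentation before relying on them.

```toml
# User-data sketch: enable the separate admin container (disabled by
# default) so SSH access is available for troubleshooting.
# Disable it again once the issue is resolved.
[settings.host-containers.admin]
enabled = true
```

Keeping the troubleshooting path in a separate, explicitly enabled container preserves the read-only, minimal posture of the host OS itself.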
Bottlerocket was designed specifically to run Docker containers or containers that comply with the Open Container Initiative (OCI) image format. Singh also notes that because Bottlerocket is open source software, IT organizations can run it anywhere they prefer.
In general, Singh says there are two aspects of container platforms that IT organizations tend to overlook. The first is the opportunity containers afford IT teams to reduce the footprint of the IT infrastructure stack by relying not just on a lighter-weight operating system but also on open source virtualization technologies such as Firecracker. Legacy virtual machines carry a lot of overhead required to run monolithic applications, while Firecracker provides the level of isolation containers require without consuming nearly as much memory.
The second issue has more to do with processes than technology. The way containerized applications need to be managed requires organizations to define a set of best DevOps processes, says Singh. That doesn’t mean every IT organization needs to hire a site reliability engineer (SRE), but Singh says it does mean there are new tools and associated processes that the average IT administrator should take the time to learn and master.
While rival cloud service providers are certainly more competitive than they were a few short years ago, the number of containerized applications running on AWS still dwarfs that of all rivals combined. AWS, however, has an incentive to optimize the application experience on Amazon Elastic Kubernetes Service (EKS) because it is significantly easier to move a containerized application from one cloud service provider to another, or even back to an on-premises IT environment. AWS, for example, makes significant investments in its AWS Fargate service, which enables organizations to deploy containers without having to manage the underlying container infrastructure.
The number of containerized applications that have moved from one cloud platform to another is still relatively small. However, many of the containerized applications being built on top of the AWS cloud wind up being deployed somewhere else. Obviously, AWS would prefer that more of the applications developed on its cloud stay there once they are deployed in a production environment.