Arm and Docker Inc. have launched a joint effort that will make it easier for developers using Docker Desktop tools to build and deploy containerized applications on systems based on Arm processors.
Kevin Ryan, senior director of software ecosystem solutions for the Infrastructure Line of Business at Arm, says that thanks to the tighter relationship between the two companies, organizations will now find it easier to build modern microservices-based applications that can be deployed in internet of things (IoT) environments. With 2 million developers already familiar with Docker Desktop, the number of developers who could potentially build an IoT application on Arm processors has increased substantially.
The two companies are focusing their efforts on optimizing Docker-based tools and platforms to run on Arm Neoverse processors, including the Neoverse N1 CPU, which Arm claims can run cloud-native applications more efficiently. Those efforts span everything from end-to-end management of the product life cycle, starting in development, to unified development environments for heterogeneous compute platforms deployed in the cloud, at the network edge and in on-premises data centers.
At the same time, the two companies are committing to providing an instance of the Docker Enterprise Engine for Amazon EC2 A1 instances based on AWS Graviton processors, which combine 64-bit Arm Neoverse cores with custom silicon designed by AWS. Arm claims those instances will enable IT organizations to achieve cost savings of up to 45% when running containerized applications on Arm.
Arm envisions developers first building a simulation of a containerized application on a public cloud such as Amazon Web Services (AWS) before deploying it on IoT endpoints and gateways. In some cases, those applications will be deployed on virtual machines; in others, memory constraints will leave no practical alternative to bare-metal platforms. In many cases, those cloud applications should run unmodified on other platforms, such as a lightweight distribution of Kubernetes deployed on an IoT gateway, but some will need to be recompiled to generate Arm executables.
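In practice, that recompilation step is often handled at image-build time rather than on the device. As a hedged sketch of one common approach, using Docker's buildx plugin (bundled with recent Docker Desktop releases) to produce a single multi-architecture image; the image name "example/iot-app" is a placeholder:

```shell
# Create and select a builder capable of targeting multiple CPU architectures.
docker buildx create --name multiarch --use

# Build the same Dockerfile for both x86-64 (cloud) and 64-bit Arm (edge)
# in one pass, and push a single multi-architecture manifest to a registry.
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag example/iot-app:latest \
  --push .
```

An Arm-based gateway that pulls example/iot-app:latest then receives the linux/arm64 variant automatically, so the same image reference works unchanged in the cloud and at the edge.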
Many IT teams may be underestimating how challenging building and deploying highly distributed IoT applications can be. In addition to sharing data with cloud platforms and accessing application programming interfaces, endpoints and gateways in many cases will also need to share data and invoke APIs directly with one another. Those requirements create data flows traveling in both north-south and east-west directions within the IoT application.
More challenging still, many of those IoT applications will be updated frequently as developers rip and replace containers to add new functionality or address a cybersecurity vulnerability. It may take operations teams that historically have controlled deployment of applications on embedded systems some time to implement the processes required to support applications based on a microservices architecture.
Regardless of the path chosen, there’s no doubt large numbers of IoT applications will be built using containers. In fact, there may come a day soon when there are a lot more containers running on the IoT edge than there are containers running in the cloud.