October 26, 2016

Microservices is an umbrella term that covers a range of technologies, the most notable and obvious of which is containers. Containers, driven primarily by the rise of Docker, are now embraced and supported across a variety of operating systems and cloud platforms. The next step in the microservices evolution, however, is to eliminate most (if not all) of those infrastructure dependencies and move to serverless applications that run more or less natively in the cloud.

Al Hilwa, program director of software development research for IDC, told me late last year, “Microservices are typically developed with modern elastic and often stateless back-end architectures, but that does not mean they are automatically scalable. Architects have to take special care to make sure that centralized services or databases are also designed to be scalable. Microservices also put a lot of pressure on APIs, highlighting the importance of strong API management technology in the software stack being employed.”

There has been a proliferation recently of services aimed at taking microservices to the next level and supporting a serverless application ecosystem. Amazon’s AWS Lambda and API Gateway, Google Cloud Functions, and Microsoft’s Azure Functions are all built on the premise of providing a generic platform layer that runs event-driven code without requiring developers to provision servers or manage container orchestration themselves.

The whole thing reminds me a little of the quest for bare metal solutions with data backup and restoration. When I worked as a network admin in the IT trenches, we dutifully backed up all of our crucial systems and data on a daily, weekly, and monthly basis so we could be prepared in the event that some catastrophe occurred. The real challenge, however, was that most backup solutions at the time were only really capable of restoring the data onto identical hardware. Any deviation in hardware (architecture, graphics card, network adapter, and so on) could complicate the whole process, or even render the backup more or less useless.

Some refer to serverless computing as “NoOps” because it essentially eliminates the operational infrastructure aspect of DevOps. “Developers follow the fire-and-forget paradigm where they upload individual code snippets that are hooked to a variety of events at runtime,” explains a Forbes contributor. “This model offers a low-touch, no-friction deployment mechanism without any administrative overhead. Serverless computing and microservices are ushering in a new form of web-scale computing.”

Serverless application code is built from small, single-purpose functions that can be triggered by cloud events. There is no need to launch or manage a virtual server or maintain a runtime environment, because the serverless application code runs directly on the supporting platform.
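To make that concrete, here is a minimal sketch of what such a single-purpose function looks like in the AWS Lambda style, written in Python. The event shape shown (an API Gateway-style HTTP request with query string parameters) and the greeting logic are illustrative assumptions, not taken from any particular production service; the point is simply that the developer writes one function and the platform invokes it when the event fires.

```python
import json


def handler(event, context):
    """A single-purpose serverless function.

    The platform calls this directly in response to a cloud event;
    there is no server to launch and no runtime to maintain.
    'event' carries the trigger payload, 'context' runtime metadata.
    """
    # API Gateway sends query parameters as a dict (or null when absent).
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello, {}!".format(name)}),
    }


if __name__ == "__main__":
    # Local invocation for illustration only; in production the
    # platform routes events to handler() itself.
    sample_event = {"queryStringParameters": {"name": "serverless"}}
    print(handler(sample_event, None))
```

Everything outside the function (deployment, scaling, routing the event to the handler) is the platform’s job, which is exactly the “fire-and-forget” model described above.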

While this is a different scenario, some of the core elements are the same. Ultimately, it’s about being able to design, execute, and maintain apps without regard for the hardware architecture or operating system platform they will run on. The reality is more complicated than that, but one of the main benefits of serverless applications is that they separate the code from the underlying infrastructure.

It’s possible that containers as a standalone concept will be a shooting star that fades quickly. With the help of serverless computing, microservices can evolve beyond containers and provide a more effective and efficient platform for developing and executing code. What do you think?

Tony Bradley is Community Manager for Tenable Network Security and Editor-in-Chief of TechSpective. Tony has a passion for technology and gadgets, with a focus on Microsoft and security. He also loves spending time with his family and likes to think he enjoys reading and golf even though he never finds the time for either.

  • Certainly a very important tool in the tool box, and one I’m very excited about, see also my talk at FOSDEM earlier this year: https://speakerdeck.com/mhausenblas/from-pets-to-cattle-to-flock-of-birds

    Two thoughts: containers are still necessary in this context to implement serverless offerings properly (isolation, resource constraints, etc.) and also let’s openly talk about limitations, esp. around execution length, stateful services and cascading.

    I’d encourage people to participate in the discussion, since we’re shaping this space as we speak: be part of the community and come to http://serverlessconf.io/ to learn more and share your experience.