Microservices and cloud-native development promise agility, resilience, organizational alignment and improved development-to-delivery cycles. So, it is no surprise that everyone is trying to hop on board as quickly as possible. This rush has led us down the now-familiar path of containerizing everything and then layering on a service mesh to connect it all. But as these initiatives mature and grow in size, we’re beginning to see the downsides of this infrastructure-heavy approach: significantly increased complexity and a troubling degradation in performance (an IBM research report concluded that the microservice model can perform 79.1% worse than the monolithic model). So much so that some are starting to question whether it’s worth it.
While the march toward microservices and cloud-native applications is inexorable, thankfully the path to get there isn’t set in stone. I believe we’ll see other, less constraining paths emerge. Applying primarily to new application development, these will be based on new programming and deployment models helped along by new, emerging technologies.
Let’s take a deeper look.
Where We Are Now
For those charged with transforming legacy infrastructure, the natural first instinct is to look at the current code—legacy, on-prem applications—and ask, How do I make this cloud-native? Or, more precisely, How do I transition my monolithic applications into a distributed, microservices-based architecture as quickly and painlessly as possible?
Of course, there is little to no appetite for rewriting anything. So, the question then becomes, Is it possible to get to a microservices architecture and cloud-native deployment without touching code? This is the ‘no-touch’ constraint … and it’s a tough ask.
Containers, Orchestration, Proxies to the Rescue
Naturally, our industry has responded with a slate of innovative solutions to meet the challenge. These include:
- Containers to help disaggregate and deploy existing code across the network.
- Kubernetes to help manage and configure the proliferation of these containers.
- Proxies (sidecars, ingress, egress, etc.) to provide the features distributed systems need (connectivity, service lookup, security, load balancing, etc.), bolted on alongside code originally written without regard to such concerns (because, in a monolith, they were not needed).
- ‘Service mesh’ control planes to administer all of the above.
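As a concrete illustration of how little the application itself changes, in an Istio-based mesh (one assumption here; other meshes work similarly) automatic sidecar injection is typically enabled with a single namespace label, and the Deployment manifests and application code stay untouched:

```yaml
# Labeling a namespace so Istio injects an Envoy sidecar proxy into
# every pod scheduled there -- the application image is unchanged.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    istio-injection: enabled
```

From that point on, connectivity, mTLS and load balancing are handled by the injected proxy rather than by the application.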
And there we have it, we can now take our current Web 2.0/REST-style applications and re-deploy them to resemble a truly distributed architecture … microservices! It’s a remarkable outcome and supported by a fervent vendor and open source project landscape that includes Envoy, Kubernetes and a grab bag of service mesh control planes including Istio, Linkerd, Maesh and Kuma, to name a few.
I’m told there has never been a free lunch in recorded history, and I’m afraid this situation is no exception. Distributed programming, deployment and architectures are inherently complex. And complexity in a system does not vanish; it only moves around. So, while we have successfully isolated our current code and (RESTful) programming model from this complexity, we have done so by shifting the complexity into the infrastructure layer.
Predictably, the result is an explosion of configuration, dependencies, instance proliferation, NxM matrices of point-to-point connections to be secured, proxies, reverse proxies, circuit breakers, bulkheads … on and on. Too often, this compromises both the resiliency and the agility of the system as a whole. And when it comes to performance (throughput, latency, etc.), the story gets dramatically worse.
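The point-to-point explosion is easy to quantify. A back-of-the-envelope sketch (illustrative only; real topologies are sparser than full meshes) shows how the number of channels that must be configured and secured grows quadratically with service count, while per-service artifacts like workload identities grow only linearly:

```python
# Back-of-the-envelope: how many point-to-point channels exist among
# n services if every pair may communicate, versus one identity each.

def point_to_point_channels(n: int) -> int:
    """Every service pair needs its own secured channel: n*(n-1)/2."""
    return n * (n - 1) // 2

def per_service_identities(n: int) -> int:
    """One workload identity (e.g., certificate) per service."""
    return n

for n in (10, 50, 200):
    print(n, point_to_point_channels(n), per_service_identities(n))
```

At 200 services that is nearly 20,000 potential channels, which is why the configuration surface balloons even when the code itself is untouched.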
The Current Path, and its Trade-offs
The inevitable question becomes, Is it all worth it? Are we just replacing one set of problems (monolithic, unwieldy architecture) with new ones (complexity, degraded performance)? Is the no-touch constraint on code and the programming model worth the added infrastructure complexity?
The answer, as always seems to be the case, is that it depends. Many organizations are constrained by the ‘no-touch’ mandate on their applications for good reasons: legacy stacks, codebases no longer under development, immutable architecture (by fiat or necessity), lack of expertise, etc. In these cases, letting the infrastructure team hermetically seal the code into containers and then connect everything via proxies is often the only good option, and taking on the added infrastructural complexity and performance cost may well be worth it.
Another Path – Where We Go From Here
What if ‘no-touch’ weren’t a hard-and-fast constraint? What if the application and its architecture were under active development and could admit some change? What if there were new, greenfield projects to take on? In these cases, our hands are no longer tied, and an exciting new set of options opens up.
The freedom to use a programming model tailored to network-centric applications paired with an infrastructure stack designed specifically for distributed computing should allow us to spread the inherent complexity across business logic, application frameworks, infrastructure and communication protocols. In principle, this should give us a more balanced architecture for microservices and cloud-native development.
As we’ve gained more experience with distributed systems, microservices and cloud-native development over the past few years, a lot of what we’ll need going forward has started coming into focus. For example, the reactive systems model is rapidly gaining mind share. It calls for a coherent architecture for building responsive, resilient, elastic and message-driven systems. These are the exact attributes we are looking for in a native distributed system architecture.
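The message-driven, flow-controlled style at the heart of reactive systems can be sketched in a few lines. This is a minimal illustration using plain asyncio (not any particular reactive framework): a bounded queue makes a fast producer yield to a slow consumer, so pressure propagates back to the source instead of buffering unboundedly.

```python
import asyncio

async def producer(queue: asyncio.Queue, n: int) -> None:
    # put() suspends when the queue is full -- backpressure reaches
    # the producer instead of piling up messages in memory.
    for i in range(n):
        await queue.put(i)
    await queue.put(None)  # sentinel: stream complete

async def consumer(queue: asyncio.Queue) -> list:
    received = []
    while (item := await queue.get()) is not None:
        received.append(item)
    return received

async def main() -> list:
    queue: asyncio.Queue = asyncio.Queue(maxsize=4)  # bounded = backpressure
    _, received = await asyncio.gather(producer(queue, 20), consumer(queue))
    return received

print(asyncio.run(main()))
```

The key design point is the `maxsize` bound: an unbounded queue would hide overload until the process ran out of memory, whereas the bounded one keeps the system responsive under load.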
Or consider something we’ve been working on here at Netifi: RSocket, a bi-directional, multiplexed, message-based, application-level (strictly layer 5/6) open source networking protocol that implements the reactive programming model for microservices communication over the network. With built-in features such as flow control, backpressure and bi-directional streaming interactions, complete with cancellation and resumption, it can dramatically simplify the development and deployment of distributed systems.
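RSocket’s credit-based flow control can be illustrated with a toy, synchronous model (this is a conceptual sketch, not the real RSocket API; on the wire the protocol expresses these as REQUEST_N and CANCEL frames): the requester grants credits with `request(n)`, and the responder emits only as many items as it has credit for, stopping immediately on cancellation.

```python
class StreamSubscription:
    """Toy model of a request-stream interaction with credit-based
    flow control and cancellation (conceptually similar to RSocket's
    REQUEST_N / CANCEL semantics; names here are illustrative)."""

    def __init__(self, source):
        self._source = iter(source)
        self._credits = 0
        self._cancelled = False
        self.received = []

    def request(self, n: int) -> None:
        # Grant n more credits, then deliver as many items as allowed.
        self._credits += n
        self._drain()

    def cancel(self) -> None:
        # After cancellation, no further items are delivered.
        self._cancelled = True

    def _drain(self) -> None:
        while self._credits > 0 and not self._cancelled:
            try:
                item = next(self._source)
            except StopIteration:
                return
            self.received.append(item)
            self._credits -= 1

sub = StreamSubscription(range(100))
sub.request(3)       # requester grants 3 credits
sub.cancel()         # ...then cancels the stream
sub.request(10)      # further credit has no effect after cancel
print(sub.received)  # only the 3 credited items were delivered
```

The consumer, not the producer, dictates the pace: the source never emits more than it has been asked for, which is exactly the property that keeps backpressure from overwhelming downstream services.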
Open source RSocket also happens to be the key underpinning technology of the Reactive Foundation, an exciting, new industrywide initiative that aims to accelerate technologies for building the next generation of networked applications.
There are many other examples. But it’s clear that our next 5- to 10-year architectures will take advantage of new programming models, technologies and industry initiatives rather than being limited to retrofitting existing architecture.
Two Paths to the Cloud-Native Transition
For the foreseeable future, I believe we’ll see two parallel paths in the transformation to microservices and cloud-native apps. One path is for existing applications constrained by the no-touch code requirement and a rigid, fixed architecture; these will be well served by the service-mesh-style ecosystem we are using today.
The other path, primarily for new applications, will deploy new programming models and technologies that will be used widely to build cloud-native applications on the microservices model.
These two paths will very likely co-exist in your organization. The question to consider is how best to bridge between the two and evolve at the right pace for your organization.